Let's try some interesting formulas. Notice how the ASCII input closely matches a linear version of the formatted output.



Pythagorean theorem: `a^2+b^2=c^2`

`AA x in CC (sin^2x+cos^2x=1)`

Dijkstra style: `(AA x: x in CC: sin^2x+cos^2x=1)`


Definition of the Riemann integral: If `f` is continuous on the interval `(a,b)`, except perhaps at finitely many points, then `int_a^b f(x)dx=lim_(n->oo)sum_[i=1]^n f(x_i^(**))Delta x` where `Delta x=(b-a)/n`, `x_i=a+iDeltax` and `x_i^(**)in[x_[i-1],x_i]`.
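Here is a quick numerical sanity check of that Riemann-sum definition in Python (the sample function `f(x)=x^2` on `[0,1]` is an arbitrary choice, not from the text; the exact value is `1/3`):

```python
# Right-endpoint Riemann sum with x_i = a + i*Delta x and Delta x = (b - a)/n,
# applied to f(x) = x^2 on [0, 1] as an illustrative example (exact value: 1/3).
def riemann_sum(f, a, b, n):
    dx = (b - a) / n                      # Delta x = (b - a)/n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 100_000)
print(approx)  # close to 1/3
```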

And then after extending the Riemann integral to improper integrals (unbounded domains) you get the magic formula

$\int_0^\infty e^{-x^2}\,dx = \tfrac{1}{2}\sqrt{\pi}.$
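The magic formula is easy to check numerically; a minimal Python sketch (the truncation point `x = 10` and the midpoint rule are my own choices — the integrand decays so fast that the discarded tail is negligible):

```python
import math

# Numerical check of the Gaussian integral: int_0^oo e^(-x^2) dx = sqrt(pi)/2.
# Truncating at x = 10 is safe because e^(-100) is astronomically small.
def gauss_integral(upper=10.0, n=100_000):
    dx = upper / n
    # midpoint rule: sample at the centre of each subinterval
    return sum(math.exp(-((i + 0.5) * dx) ** 2) for i in range(n)) * dx

print(gauss_integral())        # ~0.886227
print(math.sqrt(math.pi) / 2)  # 0.886227...
```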

`x/x=(1 if x!=0)`

`{:|x|:}={(x, if x>=0), (-x, if xlt0):}`

`{:|{:(a,b),(c,d):}|:}=ad-bc` (to get the spacing right around the |...|, we need an extra pair of invisible brackets {:   :})

`0 = {:|A - lambdaI|:} = {:|{:(3 - lambda,1),(1, 3 - lambda):}|:} = lambda^2 - 6lambda + 8 = (lambda - 2)(lambda - 4)`, so `lambda_1 = 2` and `lambda_2 = 4`.
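Those eigenvalues can be double-checked by solving the characteristic polynomial `lambda^2 - 6lambda + 8` directly with the quadratic formula:

```python
import math

# Roots of lambda^2 - 6*lambda + 8 = 0 for A = [[3, 1], [1, 3]]:
# lambda = (6 +- sqrt(36 - 32)) / 2.
a, b, c = 1.0, -6.0, 8.0
disc = math.sqrt(b * b - 4 * a * c)
lam1 = (-b - disc) / (2 * a)
lam2 = (-b + disc) / (2 * a)
print(lam1, lam2)  # 2.0 4.0
```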


$S_E^\text{lattice}=-\sum_{n, \mu}\hat{\phi}_n\hat{\phi}_{n+\hat{\mu}}+\frac{2+\hat{m}^2}{2}\sum_n\hat{\phi}_n\hat{\phi}_n$

z-statistic: `z=(x-mu)/(sigma/sqrt(n))` or `z=(x-mu)/(sigma//sqrt(n))`
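As a worked example of the z-statistic (the sample values below are made up for illustration):

```python
import math

# z = (x - mu) / (sigma / sqrt(n)); the arguments are illustrative, not from the text.
def z_statistic(x_bar, mu, sigma, n):
    return (x_bar - mu) / (sigma / math.sqrt(n))

# e.g. sample mean 52 against mu = 50, sigma = 10, n = 25:
print(z_statistic(52.0, 50.0, 10.0, 25))  # (52-50)/(10/5) = 1.0
```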

`int_0^pi sinxdx=-cosx]_0^pi=-cospi-(-cos0)=-(-1)-(-1)=2`

Decimal numbers (right-click on the expression to see the MathML code): `epsilon=.001 quad h=-.01 quad pi~~3.14159 quad` weird number `-0.123.456` and dot product `u.v`

`RR = uuu_{n=0}^oo[-n,n]` and `{0} = nnn_{n=1}^oo(- 1/n,1/n)`

`^^^_{i=1}^nphi_i = phi_1 ^^ phi_2 ^^ cdots ^^ phi_n` and `vvv_{i=1}^nphi_i = phi_1 vv phi_2 vv cdots vv phi_n`

$Q = \frac{A\,\Delta T}{\frac{\Delta_1 x}{K_1} + \frac{\Delta_2 x}{K_2} + \frac{\Delta_3 x}{K_3}+ \cdots}$

Constants `pi~~3.141592653589793`

Vectors and matrices `(a_1,...,a_n)` `((a_11,cdots,a_{1p}),(vdots,ddots,vdots),(a_{n1},cdots,a_{np}))` `||bb a||`

Expressions `a+b, a-b, a*b, a/b, a**b, a!,` `text{div}(a,b), mod(a,b)` `a*(b+c)`

Trigonometric functions `sin(x), cos(x), tan(x)`, hyperbolic functions `sinh(x), cosh(x), tanh(x)`, logarithm and exponential `log(x), "exp"(x)`

Differentiation `d/(dx)(f(x))`, integration `int_a^bf(x)dx`

Vector algebra `grad_{x_1,...,x_n}(f(x_1,...,x_n))`

Scalar and vector product `(a_1,...,a_n).(b_1,...,b_n)` `(a_x,a_y,a_z)xx(b_x,b_y,b_z)`

Better to use cdot for the scalar product: `(a_1,...,a_n)cdot(b_1,...,b_n)`

Matrix product `((a_11,cdots,a_{1k}),(vdots,ddots,vdots),(a_{n1},cdots,a_{nk}))((b_11,cdots,b_{1p}),(vdots,ddots,vdots),(b_{k1},cdots,b_{kp}))`
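The matrix product above follows the usual `n xx k` times `k xx p` convention; a plain-Python sketch of that rule (a 2×2 example, chosen arbitrarily):

```python
# Matrix product C[i][j] = sum_t A[i][t] * B[t][j] for an n x k times k x p layout.
def matmul(A, B):
    n, k, p = len(A), len(B), len(B[0])
    assert all(len(row) == k for row in A), "inner dimensions must agree"
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(p)]
            for i in range(n)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```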

Vector `hat x_0`, `vec x`, `bb v`

Reynolds number `"Re"=(rho V D)/mu`
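For the record, `Re = rho*V*D/mu`; a one-liner with illustrative values (water at 2 m/s in a 5 cm pipe — my own example numbers):

```python
# Reynolds number Re = rho * V * D / mu.
# Example values (water in a 5 cm pipe) are illustrative assumptions.
def reynolds(rho, V, D, mu):
    return rho * V * D / mu

print(reynolds(rho=1000.0, V=2.0, D=0.05, mu=1.0e-3))  # ~1e5, i.e. turbulent
```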

Traditional Chinese 繁體中文

Note that a-2 renders as `a-2`, with the minus acting as unary, whilst a - 2 solves that (`a - 2`). It would be nice for the parser to predict which is intended; I'm not sure how, though.

A possible heuristic would be: if the `-` is preceded by a letter, a digit, or one of `)`, `]`, `}`, then it is a binary minus; otherwise it is unary.

`((d/dx)^2+(d/dy)^2+(d/dz)^2 + V(x,y,z))*psi(x,y,z,t) = E*psi(x,y,z,t)`

`int_(-1)^1 sqrt(1-x^2)dx = pi/2`

`lim_(x->a) f(x)=l <=> AA epsi > 0 EE delta > 0 : 0 < {:|x-a|:} < delta => {:|f(x) - l|:} < epsi`

1. In a game between two equal teams, the home team wins any game with probability `p > 1/2`. In a best of three playoff series, a team with the home advantage has a game at home, followed by a game away, followed by a home game if necessary. The series is over as soon as one team wins two games. What is `P[H]`, the probability that the team with the home advantage wins the series? Is the home advantage increased by playing a three-game series rather than a one-game playoff? That is, is it true that `P[H] >= p` for all `p >= 1/2`?

2. Random variable K has a Poisson (`\alpha`) distribution. Derive the properties `E[K]=Var[K]=\alpha`. Hint: `E[K^2] = E[K(K-1)] +E[K]`.

3. With probability 0.7, the toss of an Olympic shot-putter travels `D=60+X` ft, where `X` is an exponential random variable with expected value `\mu = 10`. Otherwise, with probability 0.3, a foul is committed by stepping outside of the shot-put circle and we say `D=0`. What are the CDF and PDF of random variable `D`?

4. Random variables `X` and `Y` have the joint PMF

`P_(X,Y)(x,y)={(cxy, x = 1;2;4 and y=1;3),(0,otherwise):}`

(a) What is the value of the constant `c`?

(b) What is `P[Y<X]`?

(c) What is `P[Y>X]`?

(d) What is `P[Y=X]`?

(e) What is `P[Y=3]`?

5. Given the set `{U_1,...,U_n}` of iid uniform `(0,T)` random variables, we define

`X_k = "small"_k(U_1,...,U_n)`

as the `k^("th")` "smallest" element of the set. That is, `X_1` is the minimum element, `X_2` is the second smallest element of `{U_1,...,U_n}`. Note that `X_1,...,X_n` are known as the order statistics of `U_1,...,U_n`. Prove that

`f_{X_1,...,X_n}(x_1,...,x_n)= {({n!}/{T^n}, 0<= x_1<...<x_n<=T),(0, otherwise) :}`

6. Suppose in the disk drive factory in Example 8.8 of the text, we can observe `K`, the number of failed devices out of `n` devices tested. As in the example, let `H_i` denote the hypothesis that the failure rate is `q_i`.

(a) Assuming `q_0 < q_1`, what is the ML hypothesis test based on an observation of `K`?

(b) What are the conditional probabilities of error `P_(FA) = P[A_1 | H_0]` and `P_(MISS) = P[A_0 | H_1]`? Calculate these probabilities for `n=500, q_0 = 10^{-4}, q_1=10^{-2}`.

(c) Compare this test to that considered in Example 8.8. Which test is more reliable? Which test is easier to implement?

7. The random variables `X` and `Y` have the joint probability density function

`f_{X,Y}(x,y) = {(2(y+x), 0<=x<=y<=1),(0, otherwise) :}`

What is `X_L(Y)`, the linear minimum mean square error estimate of `X` given `Y`?

The WKB solution of the equation

$\ddot\chi_k + \omega_k^2(t)\chi_k=0$, where $\omega_k^2(t)=a_k + b_k \sin^2(t)$ is given by

$\chi_k(t)= \frac{\alpha_k}{\sqrt{2\omega_k(t)}}\exp(-i\int_0^t\omega_k(s)d s)+\frac{\beta_k}{\sqrt{2\omega_k(t)}}\exp(i\int_0^t\omega_k(s)d s)$

where $\alpha_k$ and $\beta_k$ are constants, provided that the following relations hold:

$| \omega_k^{-1}\frac{d}{d t}\ln \omega_k| \ll 1$

$| \omega_k^{-1}\frac{d}{d t}(\omega_k^{-1}\frac{d}{d t}\ln \omega_k)| \ll 1$

ASCIIMathML is just a toy: ${\bf X} = \left[\begin{array}{cc} 1 & 2 \\ 3 & 4 \end{array}\right]$

It handles only a small subset of LaTeX; the main focus is on a simplified syntax: `bbX = [(1,2),(3,4)]`