The document describes various probability distributions that can arise from combining Bernoulli random variables. It shows how a binomial distribution emerges from summing Bernoulli random variables, and how Poisson, normal, chi-squared, exponential, gamma, and inverse gamma distributions can approximate the binomial as the number of Bernoulli trials increases. Code examples in R are provided to simulate sampling from these distributions and compare the simulated distributions to their theoretical probability density functions.
157. Moment generating function of a single standardized term (x_i − µ)/√n, where µ_k denotes the k-th central moment of x_i:

\[
M_{\frac{x_i-\mu}{\sqrt{n}}}(t) = E\!\left[ e^{\frac{x_i-\mu}{\sqrt{n}}\,t} \right]
= 1 + \frac{\sigma^2 t^2}{2!\,n} + \frac{\mu_3 t^3}{3!\,n^{3/2}} + \cdots + \frac{\mu_k t^k}{k!\,n^{k/2}} + \cdots
\]
\[
= 1 + \frac{\sigma^2 t^2}{2n} + \frac{\varepsilon_n}{2n}
= 1 + \frac{\sigma^2 t^2 + \varepsilon_n}{2n},
\]

where ε_n collects the terms of order n^{-1/2} and higher, so that ε_n → 0 as n → ∞.
158. Sum of the standardized terms:

\[
T' = \frac{x_1-\mu}{\sqrt{n}} + \frac{x_2-\mu}{\sqrt{n}} + \cdots + \frac{x_n-\mu}{\sqrt{n}}
   = \sum_{i=1}^{n} \frac{x_i-\mu}{\sqrt{n}}
\]

Because the x_i are independent, the moment generating function of T' factorizes:

\[
M_{T'}(t) = M_{\sum_{i=1}^{n}\left(\frac{x_i-\mu}{\sqrt{n}}\right)}(t)
= E\!\left[ e^{\sum_{i=1}^{n}\left(\frac{x_i-\mu}{\sqrt{n}}\right)t} \right]
= \prod_{i=1}^{n} E\!\left[ e^{\left(\frac{x_i-\mu}{\sqrt{n}}\right)t} \right]
= \left( 1 + \frac{1}{n}\,\frac{\sigma^2 t^2 + \varepsilon_n}{2} \right)^{n}
\]

Recall the limit definition of the exponential,

\[
e^{r} \equiv \lim_{n\to\infty}\left( 1 + \frac{r}{n} \right)^{n},
\]

which continues to hold when r is replaced by a sequence r_n → r.
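As a quick numerical check on this limit (not part of the slides; the test value r = 2 is an arbitrary choice), the following R snippet evaluates (1 + r/n)^n for increasing n and compares it with exp(r):

```r
# Numerical check of e^r = lim_{n -> infinity} (1 + r/n)^n
r <- 2                                   # arbitrary test value
for (n in c(10, 100, 1000, 1e5, 1e7)) {
  cat(sprintf("n = %8.0f   (1 + r/n)^n = %.6f   exp(r) = %.6f\n",
              n, (1 + r / n)^n, exp(r)))
}
```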
159. Letting n → ∞, and using lim_{n→∞} ε_n = 0,

\[
\lim_{n\to\infty} M_{T'}(t)
= \lim_{n\to\infty}\left( 1 + \frac{1}{n}\,\frac{\sigma^2 t^2 + \varepsilon_n}{2} \right)^{n}
= e^{\frac{\sigma^2 t^2}{2}}.
\]

This is the moment generating function of N(0, σ²), so

\[
T' = \frac{T - n\mu}{\sqrt{n}}, \qquad T = \sum_{i=1}^{n} x_i,
\]

converges in distribution to N(0, σ²).
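In the same spirit as the other R simulations in the document, the sketch below illustrates this limit empirically. It is my own example: the choice of x_i ∼ Exp(1) (so µ = σ² = 1), n = 500 terms per sum, and 10000 replications is arbitrary; the simulated values of T' are compared with the N(0, σ²) density.

```r
set.seed(1)
n      <- 500      # number of terms in each sum (arbitrary)
n.rep  <- 10000    # number of simulated sums (arbitrary)
mu     <- 1        # mean of Exp(1)
sigma2 <- 1        # variance of Exp(1)

# T' = sum_i (x_i - mu) / sqrt(n), simulated n.rep times
t.prime <- replicate(n.rep, sum(rexp(n, rate = 1) - mu) / sqrt(n))

# Histogram of T' with the theoretical N(0, sigma^2) density overlaid
hist(t.prime, breaks = 50, freq = FALSE,
     main = "Simulated T' vs. N(0, sigma^2)", xlab = "T'")
curve(dnorm(x, mean = 0, sd = sqrt(sigma2)), add = TRUE, lwd = 2)
```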
185. If u ∼ N(0, 1) and v ∼ χ²(m) are independent, then the ratio

\[
t = \frac{u}{\sqrt{v/m}}
\]

follows the t distribution with m degrees of freedom, whose density is

\[
f(t) = \frac{\Gamma\!\left(\frac{m+1}{2}\right)}{\sqrt{m\pi}\;\Gamma\!\left(\frac{m}{2}\right)}
       \left( \frac{t^2}{m} + 1 \right)^{-\frac{m+1}{2}}.
\]
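This construction can be checked by simulation, matching the distribution comparisons described at the top of the document. The sketch below is my own example; m = 5 degrees of freedom and 10000 draws are arbitrary choices. It samples u and v independently, forms t = u / √(v/m), and overlays the theoretical density dt(x, m).

```r
set.seed(1)
m     <- 5         # degrees of freedom (arbitrary)
n.sim <- 10000     # number of draws (arbitrary)

u <- rnorm(n.sim)            # u ~ N(0, 1)
v <- rchisq(n.sim, df = m)   # v ~ chi^2(m), independent of u
t.sim <- u / sqrt(v / m)     # t = u / sqrt(v/m)

# Histogram of the simulated ratio with the t_m density overlaid
hist(t.sim, breaks = 100, freq = FALSE, xlim = c(-6, 6),
     main = "Simulated t ratio vs. t density", xlab = "t")
curve(dt(x, df = m), add = TRUE, lwd = 2)
```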
186. Derivation: take u ∼ N(0, 1) and v ∼ χ²(m) independent, with −∞ < u < +∞ and v > 0, so that the joint density is

\[
f(u, v) = \frac{1}{\sqrt{2\pi}} \exp\!\left( -\frac{u^2}{2} \right)
          \cdot \frac{(1/2)^{m/2}}{\Gamma(m/2)}\, v^{m/2-1} e^{-v/2}.
\]

Changing variables to t = u / √(v/m), x = v, and integrating out x yields

\[
f(t) = \frac{\Gamma\!\left(\frac{m+1}{2}\right)}{\sqrt{m\pi}\;\Gamma\!\left(\frac{m}{2}\right)}
       \left( \frac{t^2}{m} + 1 \right)^{-\frac{m+1}{2}},
\qquad
\Gamma(z) = \int_{0}^{\infty} t^{z-1} e^{-t}\, dt.
\]
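As a check that the derived formula is the usual t density, the short R sketch below (my own addition; m = 5 and the evaluation grid are arbitrary) codes f(t) directly using gamma() and compares it with R's built-in dt().

```r
# f(t) from the derivation, written out with the gamma function
f.t <- function(t, m) {
  gamma((m + 1) / 2) / (sqrt(m * pi) * gamma(m / 2)) *
    (t^2 / m + 1)^(-(m + 1) / 2)
}

m    <- 5
grid <- seq(-4, 4, by = 0.5)
# Largest discrepancy against dt(); should be numerical error only
print(max(abs(f.t(grid, m) - dt(grid, df = m))))
```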
187. Estimating µ from data D = (x_1, ..., x_n) with x_i ∼ N(µ, σ²). The sample mean and the scaled sum of squared deviations satisfy

\[
\bar{x} \sim N(\mu, \sigma^2/n),
\qquad
\frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i - \bar{x})^2 \sim \chi^2_{n-1},
\]

and the two statistics are independent.
188. Substituting into the t ratio of slide 185:

\[
u = \frac{\bar{x}-\mu}{\sigma/\sqrt{n}} \sim N(0, 1),
\qquad
v = \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i - \bar{x})^2 \sim \chi^2_{n-1},
\]

\[
t = \frac{u}{\sqrt{v/(n-1)}}
  = \frac{\bar{x}-\mu}{\sigma/\sqrt{n}}
    \cdot \left[ \frac{1}{\sigma^2}\,\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2 \right]^{-1/2}
  = \frac{\bar{x}-\mu}{1/\sqrt{n}} \cdot \frac{1}{\sqrt{s^2}}
  = \frac{\bar{x}-\mu}{s/\sqrt{n}} \sim t_{n-1},
\]

where s² is the unbiased sample variance,

\[
s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2.
\]

The unknown σ cancels, so the statistic can be computed from the data alone.
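This distributional claim can again be checked by simulation. The sketch below is my own example (µ = 10, σ = 3, n = 8, and 10000 replications are arbitrary): it computes (x̄ − µ)/(s/√n) from repeated normal samples and overlays the t_{n−1} density.

```r
set.seed(1)
mu    <- 10; sigma <- 3   # true parameters (arbitrary)
n     <- 8                # sample size (arbitrary)
n.rep <- 10000            # number of replications

t.stat <- replicate(n.rep, {
  x <- rnorm(n, mean = mu, sd = sigma)
  (mean(x) - mu) / (sd(x) / sqrt(n))   # sd() uses the unbiased s
})

# Histogram of the simulated statistic with the t_{n-1} density overlaid
hist(t.stat, breaks = 100, freq = FALSE, xlim = c(-6, 6),
     main = "Simulated (xbar - mu)/(s/sqrt(n)) vs. t density", xlab = "t")
curve(dt(x, df = n - 1), add = TRUE, lwd = 2)
```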
189. Since (x̄ − µ)/(s/√n) ∼ t_{n-1},

\[
P\!\left( -t_{n-1;\alpha/2} \le \frac{\bar{x}-\mu}{s/\sqrt{n}} \le t_{n-1;\alpha/2} \right) = 1-\alpha.
\]

[Figure: t_{n-1} density; central region [−t_{n-1;α/2}, t_{n-1;α/2}] with probability 1−α, each tail with probability α/2.]

Rearranging the inequalities to isolate µ,

\[
P\!\left( \bar{x} - t_{n-1;\alpha/2}\frac{s}{\sqrt{n}} \le \mu \le \bar{x} + t_{n-1;\alpha/2}\frac{s}{\sqrt{n}} \right) = 1-\alpha,
\]

so the interval \(\left[ \bar{x} - t_{n-1;\alpha/2}\,\frac{s}{\sqrt{n}},\; \bar{x} + t_{n-1;\alpha/2}\,\frac{s}{\sqrt{n}} \right]\) covers µ with probability 1−α.
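The coverage statement can be verified empirically. The R sketch below is my own example (α = 0.05, µ = 10, σ = 3, and n = 8 are arbitrary): it builds the interval for many simulated data sets and reports how often it contains µ, which should be close to 1 − α.

```r
set.seed(1)
alpha <- 0.05
mu    <- 10; sigma <- 3; n <- 8   # arbitrary true parameters and sample size
n.rep <- 10000

covered <- replicate(n.rep, {
  x    <- rnorm(n, mean = mu, sd = sigma)
  half <- qt(1 - alpha / 2, df = n - 1) * sd(x) / sqrt(n)  # t_{n-1; alpha/2} * s / sqrt(n)
  (mean(x) - half <= mu) && (mu <= mean(x) + half)
})

mean(covered)   # empirical coverage, close to 1 - alpha = 0.95
```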