
Ch5 Exercises 1 through 5#36

Open
canyon289 wants to merge 4 commits into aloctavodia:master from canyon289:ch5_1_5

Conversation

@canyon289
Collaborator

  • Exercise 1 Approved
  • Exercise 2 Approved
  • Exercise 3 Approved
  • Exercise 5 Approved

@canyon289 canyon289 changed the title Ch5 1 5 [WIP] Ch5 Exercises 1 through 5 Jun 30, 2019
@canyon289 canyon289 changed the title [WIP] Ch5 Exercises 1 through 5 Ch5 Exercises 1 through 5 Jul 4, 2019
@aloctavodia
Owner

It should be "Chapter 5 Exercises"

Exercises 1 and 3

If you use the original x_1s instead of linspace, things look much better. Alternatively, use linspace and then normalize the new data.

Something like this:

```python
for i, (sd, trace) in enumerate(trace_1[:3]):
    α_p_post = trace['α'].mean()
    β_p_post = trace['β'].mean(axis=0)
    x_1p = np.vstack([x_new**i for i in range(1, order_1+1)])
    x_new_p = (x_1p - x_1p.mean(axis=1, keepdims=True)) / x_1p.std(axis=1, keepdims=True)
    y_p_post = α_p_post + np.dot(β_p_post, x_new_p)
    plt.plot(x_new_p[0], y_p_post, f'C{i}',
             label=f'model order {order_1} sd-{sd}',
             alpha=.5)

plt.scatter(x_1s[0], y_1s, c='C0', marker='.')
plt.legend();
```
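The key step in the snippet above is standardizing each row of the new design matrix so it matches the scale the model was fit on. A minimal self-contained sketch of just that standardization, using made-up data and an assumed `order_1 = 2` in place of the notebook's variables:

```python
import numpy as np

order_1 = 2                      # polynomial order (assumed for illustration)
x_new = np.linspace(-3, 3, 100)  # new inputs on the raw scale

# Build the polynomial design matrix: one row per power of x_new
x_1p = np.vstack([x_new**p for p in range(1, order_1 + 1)])

# Standardize each row (subtract its mean, divide by its std) so the
# new data lives on the same scale as the standardized training data
x_new_p = (x_1p - x_1p.mean(axis=1, keepdims=True)) / x_1p.std(axis=1, keepdims=True)

print(x_new_p.shape)         # (2, 100)
print(x_new_p.mean(axis=1))  # each row ~0
print(x_new_p.std(axis=1))   # each row ~1
```

After this, the posterior-mean coefficients can be applied directly via `np.dot(β_p_post, x_new_p)` as in the loop above.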

Exercise 2

This is probably my mistake; I guess at some point I had a block of code generating dummy_data.
You can create more samples using a bootstrapping + noise approach:

```python
idx = np.random.choice(range(len(dummy_data)), size=500)
dummy_data_500 = np.random.normal(dummy_data[idx], dummy_data.std(0) / 10)
```
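For reference, here is a self-contained version of the same bootstrap + noise idea, with a synthetic array standing in for the notebook's dummy_data (the shape and seed are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for dummy_data: a small (n, 2) array of (x, y) points
dummy_data = rng.normal(size=(57, 2))

# Bootstrap: resample rows with replacement to get 500 points...
idx = rng.choice(len(dummy_data), size=500)

# ...then jitter each resampled point with Gaussian noise whose scale
# is a tenth of the per-column standard deviation
dummy_data_500 = rng.normal(dummy_data[idx], dummy_data.std(0) / 10)

print(dummy_data_500.shape)  # (500, 2)
```

The jitter keeps the resampled points from being exact duplicates while preserving the overall shape of the original data.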

