A few years ago, when I was teaching math at a big state university, a colleague told me the following.

She was comparing notes with a professor at a nearby school on how their respective real analysis courses were going. She told him that they had just proved that the square root of two was an irrational number. He laughed and said she was way ahead of him; his class had just proved that the square root of two is a number.

[Don't feel bad if you don't get it. This is not the sort of thing that normal people talk about.]

The joke was that, while it may sound impressive, showing that the square root of two is irrational is fairly easy. There's a nice, elegant little proof that is easy to explain and is suitable for anyone who has completed the first few sections of high school algebra. On the other hand, showing that a real number X exists such that X squared equals two is actually a bit of a challenge.
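For the record, the elegant little proof alluded to above is the classic parity argument:

```latex
% Classic proof by contradiction that \sqrt{2} is irrational.
Suppose $\sqrt{2} = p/q$ with $p, q$ integers sharing no common factor.
Then $p^2 = 2q^2$, so $p^2$ is even, and therefore $p$ is even; write $p = 2k$.
Substituting, $4k^2 = 2q^2$, so $q^2 = 2k^2$, and $q$ is even as well.
But then $p$ and $q$ share the factor $2$, contradicting our assumption.
Hence no such fraction exists, and $\sqrt{2}$ is irrational.
```

Compare that half page of algebra with the construction of the real numbers needed to show that such a number exists at all.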

There is one other point here which ties in to our ongoing math curriculum thread. Namely that, with the borderline exception of high school geometry (and even there we cheat a little), a truly rigorous approach to lower-level mathematics is wildly impractical. The order in which concepts are needed does not match up at all with the order of difficulty of proof (the fundamental theorem of calculus comes to mind). Therefore much (probably most) of what we tell students is backed up with no more than a "trust us." We don't have to like this but we do have to acknowledge it.

This doesn't mean that proofs aren't important, but that the importance lies in the process and not in the result.

# You Do the Math -- K thru Calculus

A blog of tips and recommendations for anyone interested in learning or teaching mathematics.

## Thursday, July 2, 2015

## Monday, June 29, 2015

### Eureka Math Tips for Parents -- worst SAT prep question ever

In the last installment (click here), I pointed to some extremely formal and difficult-to-read definitions such as this:


Similarity Transformation: A similarity transformation, or similarity, is a composition of a finite number of basic rigid motions or dilations. The scale factor of a similarity transformation is the product of the scale factors of the dilations in the composition; if there are no dilations in the composition, the scale factor is defined to be 1.

I argued that this level of rigor was inappropriate for eighth grade math, but that's only half the story. It is possible to make a case for teaching junior high kids using the language and structure of an abstract algebra class. Not a good case, in my opinion, but not a nonsensical one either.

If you can pull it off, there's something to be said for the axiomatic approach, having each statement emerge naturally from the postulates and theorems that came before. There is also something to be said for learning to formulate closely reasoned, exactly worded arguments. That's an experience probably best deferred until after puberty, but still...

What we have here, though, is something entirely different. The difficult technical language doesn't actually lead to anything. Students are dragged through these incredibly dense and confusing set-ups only to have them followed by something like this (from the same lesson) [emphasis added]:

No, even though we could say that the corresponding sides are in proportion, there exists no single rigid motion or sequence of rigid motions that would map a four-sided figure to a three-sided figure. Therefore, the figures do not fulfill the congruence part of the definition for similarity, and Figure A is not similar to Figure A′.

God, this is maddening. You can't say that "the corresponding sides are in proportion"; you can't say ANYTHING about the corresponding sides here. The term is meaningless when comparing a rectangle and a triangle.

From Wikipedia:

In geometry, the tests for congruence and similarity involve comparing corresponding sides of polygons. In these tests, each side in one polygon is paired with a side in the second polygon, taking care to preserve the order of adjacency.

If you use this or any other standard definition, a statement that has sides of a rectangle corresponding to sides of a triangle is gibberish.

I suspect that the authors were trying to model their problem after something like this:

I didn't bother to label these but you get the idea. It is easy to come up with examples where corresponding sides are in proportion but the figures are not similar, as long as the polygons have more than three sides (and, of course, the same number of sides).
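A concrete instance of the kind of example the authors probably had in mind (my construction, not theirs): a unit square and a rhombus with side 2 and a 60-degree angle have all corresponding sides in the ratio 1:2, yet the figures are not similar because the angles differ.

```python
import math

# A hypothetical example: a unit square and a rhombus with side 2
# and a 60-degree angle, vertices listed in order of adjacency.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
dx, dy = 2 * math.cos(math.radians(60)), 2 * math.sin(math.radians(60))
rhombus = [(0, 0), (2, 0), (2 + dx, dy), (dx, dy)]

def side_lengths(poly):
    """Lengths of the sides of a polygon, in order of adjacency."""
    return [math.dist(poly[i], poly[(i + 1) % len(poly)])
            for i in range(len(poly))]

# Corresponding sides are all in the ratio 2 : 1 ...
ratios = [b / a for a, b in zip(side_lengths(square), side_lengths(rhombus))]
print([round(r, 6) for r in ratios])  # [2.0, 2.0, 2.0, 2.0]
# ... but the figures are not similar: no similarity transformation
# can carry a 90-degree angle to a 60-degree angle.
```

With quadrilaterals and up, proportional sides alone never guarantee similarity; that extra degree of freedom is exactly what makes the triangle case special.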

Regardless of what they were trying to do, the authors had a very weak grasp of the concept of corresponding sides and that's a pretty scary thought given that they were writing a lesson on similarity.

## Sunday, June 28, 2015

### Eureka Math Tips for Parents -- well, that clears up everything

Back to the language of Eureka Math thread.


I previously made the distinction between inaccuracies that come from oversimplifying a concept (particularly by using less precise, nontechnical language) and those that come from not understanding that concept. The first type is regrettable but occasionally unavoidable (at least in lower level classes); there is no real excuse for the second.

When writers make mistakes while explaining technical concepts with non-technical language, it can often be difficult to decide which kind of mistake they're making (think Malcolm Gladwell). If, on the other hand, the writers drag the readers through impenetrable technical explanations and still get things wrong...

We've already established that there are lots of mistakes in the Eureka Math materials (with more examples to come), but it's important to note that the mistakes we are talking about do not come from oversimplifying the concepts or making them accessible to younger learners. Instead the lessons are often filled with formal and painfully dense mathematical explanations.

Here's an example from the parent section of Eureka Math ("a suite of tools that will help you to help your child learn more"). Keep in mind, the target audience is parents who are having trouble with the homework their eighth graders are bringing home.

Check this one out.

Here's a more readable version (at least in terms of font size):

Dilation: A transformation of the plane with center O and scale factor r (r > 0). If D(O) = O and if P ≠ O, then the point D(P), to be denoted by Q, is the point on the ray OP so that |OQ| = r|OP|. If the scale factor r ≠ 1, then a dilation in the coordinate plane is a transformation that shrinks or magnifies a figure by multiplying each coordinate of the figure by the scale factor.

Congruence: A finite composition of basic rigid motions—reflections, rotations, translations—of the plane. Two figures in a plane are congruent if there is a congruence that maps one figure onto the other figure.

Similar: Two figures in the plane are similar if a similarity transformation exists, taking one figure to the other.

Similarity Transformation: A similarity transformation, or similarity, is a composition of a finite number of basic rigid motions or dilations. The scale factor of a similarity transformation is the product of the scale factors of the dilations in the composition; if there are no dilations in the composition, the scale factor is defined to be 1.

Similarity: A similarity is an example of a transformation.

This wouldn't be A-level work in a graduate math class (I had to read through this a couple of times before I realized that O was supposed to be the center of dilation; dangling modifiers are a bad idea in a formal definition), but the language would be appropriate. For the target audience, though, you might as well be speaking Sanskrit.
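Stripped of the notation, the dilation definition just says that D(P) sits on the ray from O through P, r times as far out: D(P) = O + r·(P − O). A few lines of Python (my translation, not Eureka's) say the same thing:

```python
def dilate(point, center, r):
    """Dilation with center O and scale factor r > 0:
    D(P) lies on the ray OP with |O D(P)| = r * |OP|."""
    (x, y), (ox, oy) = point, center
    return (ox + r * (x - ox), oy + r * (y - oy))

# Scale factor 2 about the origin doubles each coordinate.
print(dilate((3, 4), (0, 0), 2))    # (6, 8)
# The center is fixed: D(O) = O.
print(dilate((1, 1), (1, 1), 0.5))  # (1.0, 1.0)
```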

If you're reading this, it's likely you're a math person, and there's a good chance you've taken classes like abstract algebra and real analysis. If so, I'd like you to try a bit of sympathetic imagination and put yourself in the place of someone who struggles to understand the eighth grade math homework his or her children bring home. Now go back and read this section.

I can't imagine anyone who actually needs help getting anything out of this, while the people who can follow these definitions could probably do a better job writing them on their own.

[Later in the same PDF, the authors make an appalling mistake explaining corresponding sides, but that's a topic for another post.]

## Thursday, June 25, 2015

### Problems with Common Core and EngageNY -- statistics edition

Last week we came back to the topic of Common Core standards thanks to this extraordinary post by Gary Rubinstein which uncovered some appalling quality control issues.


The problems involved EngageNY (arguably the gold standard in CC-based lesson plans). Rubinstein focused on algebra; I decided to check out the sections on statistics. What I found was uniformly bad. I'm going to focus on one section [Lesson 30: Evaluating Reports Based on Data from an Experiment], but the general concerns apply to all of the sections I looked at.

When explaining a highly technical subject to younger students, we sometimes go too far in an effort to smooth off the edges. We lose precision trying to stick with everyday language and we leave out important details because they greatly complicate the picture. When we try to communicate scientific concepts, there will always be a trade-off between being accurate and being understandable.

This is invariably a judgment call. What's more, it is a judgment call that varies from subject to subject and from audience to audience. We can argue about where exactly to make the cut, but we can’t really say one position is right and the other is wrong.

That’s not what we’re talking about with EngageNY. The authors like to throw in impressive-sounding scientific language and wordy constructions but not in a way that makes the writing more precise.

For example:

Students should look to see if the article explicitly states that the subjects were randomly assigned to each treatment group. This is important because random assignment negates the effects of extraneous variables that may have an effect on the response by evenly distributing these variables into both treatment groups.

"[N]egates the effects of extraneous variables that may have an effect" is not a phrase that the typical high school student will find particularly informative, but this paragraph also manages to be not quite right. That "evenly" seems to suggest that the actual distributions (rather than the expected distributions) of non-treatment variables will be identical, while the part about "distributing" variables just seems odd.
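A small simulation (my sketch, not from the lesson, using a hypothetical covariate) makes the distinction concrete: under random assignment the two groups match in expectation, but any single randomization leaves some imbalance.

```python
import random
import statistics

random.seed(0)

# A hypothetical extraneous variable, e.g. age, for 100 subjects.
ages = [random.gauss(50, 10) for _ in range(100)]

def assignment_gap(ages):
    """Randomly split subjects into two groups of 50;
    return the difference in mean age between the groups."""
    shuffled = ages[:]
    random.shuffle(shuffled)
    a, b = shuffled[:50], shuffled[50:]
    return statistics.mean(a) - statistics.mean(b)

# Any single randomization shows some imbalance...
single = assignment_gap(ages)

# ...but across many randomizations the gap averages out to about zero.
gaps = [assignment_gap(ages) for _ in range(2000)]
avg_gap = statistics.mean(gaps)

print(f"one randomization: gap = {single:.2f}")
print(f"average over 2000 randomizations: gap = {avg_gap:.3f}")
```

Random assignment balances covariates in expectation; it does not make the groups identical, which is exactly the nuance the lesson's "evenly distributing" glosses over.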

At best, these lessons are sloppy; at worst, they’re wrong. Take this for example:

Suppose newspaper reporters brainstormed some headlines for an article on this experiment. These are their suggested headlines:

A. “New Treatment Helps Pericarditis Patients”

B. “Colchicine Tends to Improve Treatment for Pericarditis”

C. “Pericarditis Patients May Get Help”

7. Which of the headlines above would be best to use for the article? Explain why.

Headline A would be the best because this is a well-designed experiment. Therefore, a cause and effect relationship has been established. Headlines B and C talk about a tendency relationship, not a cause and effect relationship.

“Tends to improve” implies a causal relationship, as does “help” in this context. The authors appear to have confused “causal” with “deterministic.”

The quality issues we see associated with the implementation of Common Core bear a striking resemblance to the problems noted by Richard Feynman when critiquing the New Math reforms of the Sixties.

The reason was that the books were so lousy. They were false. They were hurried. They would try to be rigorous, but they would use examples (like automobiles in the street for "sets") which were almost OK, but in which there were always some subtleties. The definitions weren't accurate. Everything was a little bit ambiguous -- they weren't smart enough to understand what was meant by "rigor." They were faking it. They were teaching something they didn't understand, and which was, in fact, useless, at that time, for the child.

## Wednesday, June 17, 2015

### More fun with Common Core and exponents

After the last post, I decided to follow some of the links Gary Rubinstein provided to see if the other lessons covering exponents were as bad as the examples he chose.

See for yourself.

In case the print is too small...


## Tuesday, June 16, 2015

### Common Core -- "and the brains of Isadora Duncan"

Mathematics educator and blogger Gary Rubinstein has been doing some extraordinarily important work on the Common Core beat. There's a lot of confusion about the boundaries of the discussion, and they have a way of shifting between and sometimes even within arguments.

A great deal of that shifting centers around exactly what constitutes Common Core. Proponents will often fall back to the "just a set of standards" definition and claim that criticisms involving implementation are invalid. That's a difficult position to hold since the standards without implementation aren't that meaningful, particularly considering that schools already have state or district standards in place that aren't all that different from Common Core.


A much more tenable defense is that the bad examples circulating online aren't really aligned with Common Core and we should wait until there's a faithful implementation before judging. This is the argument Rubinstein goes after, starting with this analysis by Education Week.

[Eureka Math is more or less the same as EngageNY. I'm still looking for an official statement but you can pretty much use them interchangeably.]

I volunteer with an after-school tutoring program here in LA. In a program that largely focuses on language arts, my role is designated math guy. I go from table to table answering any algebra/geometry/calculus questions that the regular tutors can't handle. Sometimes this is because the tutors have forgotten what similar triangles are or how to apply the quadratic formula, but just as often, the problem has less to do with math and more to do with Common Core. It is not unusual for a volunteer who happens to be an engineer or a computer scientist to wave me over because he or she has no idea what the question is asking. In these cases, I noticed that the EngageNY logo often appeared at the bottom of the worksheet.

Rubinstein has dug deeply into the EngageNY materials being given to students and what he has found is not good.

First of all, some lessons are full of errors. Second, some lessons are unnecessarily boring, and third, some lessons are unnecessarily confusing.

I should note that I have not gone through every module in every grade. I also did not search through to cherry pick examples that were particularly bad. I just randomly picked some important topics to see how they covered them and either I just happened to find the only four bad lessons in my first four tries or there are so many flawed lessons in this project that randomly selecting a bad one is quite likely. It’s a bit like evaluating a singer and the first few songs you listen to are out of tune. How many more do you have to listen to before you can safely assume that this is not someone with a lot of talent?

Exhibit A is the first lesson in the first module for 8th grade, exponents. On the second page, they introduce the concept of raising a negative number to a positive integer. Every real math teacher knows that there is a difference between the two expressions (-2)^4 and -2^4. The first one means (-2)*(-2)*(-2)*(-2)=+16 while the second one, without the parentheses around the -2 means -1*2*2*2*2=-16. I have checked with all the math teachers I know, and none have ever seen -2^4 interpreted as (-2)^4. Yet, here all over lesson one module one for 8th grade EngageNY teacher’s edition, we see this mistake.

And from the 8th grade EngageNY teacher’s edition

According to Rubinstein, this mistake is made eighteen out of the twenty times the topic is addressed.
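The convention Rubinstein describes is not just a classroom nicety; it is exactly how programming languages and computer algebra systems parse the expression. A quick check in Python, where `**` is exponentiation:

```python
# Exponentiation binds tighter than unary minus, in standard
# mathematical notation and in Python alike:
# -2**4 means -(2**4), not (-2)**4.
print(-2**4)    # -16
print((-2)**4)  # 16
```

Any student who carries the EngageNY reading of -2^4 into a calculator, a spreadsheet, or a programming class is in for a surprise.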

I'll be coming back to this later but for now I'm going to close with a couple of paragraphs from my Monkey Cage piece on Common Core and the New Math of the Sixties.

One of the best summaries of these criticisms came from Pólya, who alluded to the famous, though probably apocryphal, story of Isadora Duncan suggesting to George Bernard Shaw that they should have a child because it would have her beauty and his brains, to which Shaw is supposed to have replied that it could well have her brains and his beauty.

Pólya suggested that new math was somewhat analogous to Duncan's proposal. The intention had been to bring mathematical researchers and high school teachers together so that the new curriculum would combine the mathematical understanding of the former and the teaching skills of the latter, but the final product got it the other way around.

The sad part is that this time they found someone with far less mathematical understanding than the high school teachers.
