*This is part of the occasional series of guest blog posts on Launchings. Others who wish to submit a guest blog post for consideration should contact bressoud@macalester.edu.*

This guest blog promotes a new article, titled “Connecting the Stakeholders: Departments, Policy, and Research in Undergraduate Mathematics Education,” which comes out of the Progress through Calculus (PtC) research project. Some housekeeping and context first: PtC is an NSF-funded project (DUE No. 1430540) run in conjunction with the MAA, and is in many ways a continuation of the earlier *Characteristics of Successful Programs in College Calculus* (CSPCC; aka the MAA National Study of College Calculus). The new article has been published in the journal PRIMUS, which is now available as part of membership in the MAA (more on that here).

Now, the paper!

For years, attention and concern have been focused on introductory postsecondary mathematics courses. These courses are traditionally thought of as part of the gateway to STEM degrees, and their ubiquity makes them a key factor in students’ postsecondary education experiences. High enrollment in these courses across the country makes them a common target for administrators and others hoping to improve retention and graduation rates, particularly in STEM fields. Thus, introductory mathematics courses, particularly precalculus and calculus, are a topic of conversation among members of mathematics departments, researchers of undergraduate mathematics education, and policy makers at multiple levels (among others). However, these conversations are not always on the same plane.

University of Saint Thomas, Saint Paul, Minnesota, site of the MAA-sponsored conference on *Precalculus to Calculus: Insights & Innovations*

Our experience hosting a conference of (roughly) one hundred people with different relationships to those three communities provided us with a clear window into some of these discrepancies. The *Precalculus to Calculus: Insights and Innovations* conference was hosted at the University of Saint Thomas in Saint Paul, Minnesota in 2016 as part of the PtC and CSPCC projects, with support from the MAA. This conference, and the resulting paper, are part of continuing efforts to connect researchers, practitioners, and policy makers in the arena of undergraduate mathematics education. We chose to organize and share our observations publicly in the hope of furthering the ongoing conversation and supporting the collaborations that are critical for widespread and sustainable improvement.

The conference was organized into sessions, each introduced by a panel but consisting mainly of attendee discussions facilitated by project team members. This was a purposeful attempt to engage participants in conversation and innovation. The active and engaged (participant-centered, one might say) model allowed us to **listen** to our participants as they **engaged** in meaningful conversations, and to record **emerging** themes in those conversations (we try to walk the walk…). Our paper highlights four main themes:

- Myriad definitions, operationalizations, and views of “active learning,” and how that inconsistency is problematic
- A lack of clear strategies for supporting *each and every* student in mathematics in ways that redress systemic inequities and acknowledge individualized experiences
- The content of precalculus/calculus courses and the systems for placing new students into appropriate courses in the sequence
- Connections between program goals, strategies for reaching those goals, and assessments

We hope that this paper will further facilitate the ongoing conversations between stakeholders (and spark new ones!) to better support student success in mathematics and beyond.

**Reference**

Apkarian, N., Kirin, D., Gehrtz, J., & Vroom, K. (2019). Connecting the stakeholders: Departments, policy, and research in undergraduate mathematics education. *PRIMUS*. https://doi.org/10.1080/10511970.2019.1629135

*Naneh Apkarian is a post-doctoral researcher at the Center for Research on Instructional Change in Postsecondary Education at Western Michigan University*

*Jessica Gehrtz is a post-doctoral researcher at the Scientists Engaged in Education Research Center at the University of Georgia*

*Dana Kirin and Kristen Vroom are graduate students in the Mathematics Education Program at Portland State University*

*This is an excerpt from a longer article that has been submitted to the Notices of the AMS. A draft of the full submission can be found **here**.*

On May 30–31, 2019, representatives of the departments of mathematics at 24 public and private universities met in Lincoln, Nebraska to share department-wide efforts to improve teaching and learning in their precalculus through single-variable calculus sequence. These departments are working either with *Progress through Calculus* (PtC, NSF # 1430540), run by the Mathematical Association of America (MAA), or *Student Engagement in Mathematics through an Institutional Network for Active Learning* (SEMINAL, NSF #’s 1624643, 1624610, 1624628, and 1624639), a project of the Association of Public and Land-Grant Universities (APLU). Representatives reported on course coordination, collection and use of data, variations in course structure, training for graduate teaching assistants, culturally responsive teaching, use of active learning, and efforts directed at changing departmental culture.

Banner from the SEMINAL homepage.

The conference generated a great deal of energy and excitement, a sense that real change is happening. And yet, to many this activity seems reminiscent of the Calculus Reform efforts of the late 1980s and early ‘90s. I am often asked what is different now. Is this simply another iteration of a doomed effort to change these pivotal courses?

It is important to recognize that the Calculus Reform effort was not a failure. It made a real difference as can be seen by comparing textbooks of the 1980s and today. It emphasized the importance of graphical representations, verbal descriptions and communication skills, as well as projects and deep explorations of selected topics. It also served to initiate or accelerate efforts that are bearing fruit today such as Project NExT, the Scholarship of Teaching and Learning, and the explosive expansion of scholarly research into undergraduate mathematics education.

Nevertheless, those who worked at the forefront of the Calculus Reform movement had a vision that has not been realized, a vision that lives on in our current efforts. The goal is calculus classes that engage all students in the joy of mathematical exploration and the satisfaction of deep learning: not just the memorization of procedures, but ownership of them, so that their principles can be applied flexibly in unfamiliar situations.

In 1997, *The Chronicle of Higher Education* published its post-mortem of the Calculus Reform movement. The article concluded with a discouraging comment from Ed Dubinsky, one of the fathers of this effort, “Except for a small number of isolated pockets, it will be hard to tell that there was a calculus reform. [In a few years] we'll become upset that very few people are really learning calculus and we'll have another round of reforms. I hope that round survives.”

I see three reasons why this is not just a repeat of what happened thirty years ago.

First, the reform agenda that drove Calculus Reform did not disappear. Rather, it developed a lower profile that saw an acceleration of research in undergraduate mathematics education and a continuation of experimentation, leading to a better understanding of the critical elements of improved undergraduate instruction. Today we are building on thirty years of experience. We have a more sophisticated sense of what works and what does not, of where technology can be a support and where it is a hindrance. We have accumulated data to back up assertions of best practice. Part of this building process has involved making connections to educational researchers in other STEM fields, especially the physics education research community, but also in biology, chemistry, geology, and engineering education.

Second, in 1990 the argument could be made that undergraduate mathematics education was working. The work of CUPM in the 1960s that shaped our current curriculum was directed toward students who would be ready to start calculus when they got to college, predominantly middle-class white males. In the 1960s, they constituted over half of college graduates. While the percentage of bachelor’s degrees going to white males had dropped to 38% by 1990, we were still producing adequate numbers of scientists and engineers who were able to meet the challenges they would face. Moreover, the experiments of the early Calculus Reform could and sometimes did go wrong. Departments were often reluctant to run the risk of changing the approaches then in place. Adding to this reluctance was the widespread belief that whatever was wrong was the fault of K-12 education, not the colleges and universities.

Today, the flaws of traditional methods for teaching calculus are far more apparent. The reaction to Calculus Reform that asserted high failure rates to be the price of improving K-12 education is now totally unacceptable. Presidents, provosts, and deans have come to recognize the cost to their institutions of high failure rates. There is pressure both to improve passing rates *and* to ensure those students go on to succeed in their subsequent courses. In too many cases, the status quo accomplishes neither. This is the economic argument for embracing changes that we know work. Such economic imperatives carry a lot of weight.

Combined with these pressures is the recognition that white males now barely exceed a quarter of college graduates, and demands on the mathematical sciences have been expanding and changing in fundamental ways. ASA’s GAISE report for undergraduate statistics (ASA, 2016), COMAP and SIAM’s GAIMME report (COMAP and SIAM, 2019), and especially *The Mathematical Sciences in 2025* (NRC, 2013) make it clear that today’s undergraduate preparation in mathematics must be more than a proving ground where students demonstrate that they can survive the curriculum of the 1960s. It must actually begin the process of equipping them for the challenges they will face in the changing landscape of 21st-century workforce demands.

Third, the goals this time around are very different. There was a naiveté to the Calculus Reform movement: a belief that if we built the ideal calculus curriculum, then mathematicians would embrace it and adapt to the demands it made on how they taught calculus. Today the focus is on training and support for new generations of educators. We are learning that we must demonstrate how to promote student engagement in the kind of learning that leads to ownership of the concepts and methods. We are now learning what it takes to prepare and support graduate students and new faculty, as well as experienced faculty, to teach in this way.

While a bit simplistic, the third thing that is different this time can be summarized as a shift from an emphasis on what is taught to how it is taught. This is combined with the recognition that teaching for meaningful learning is not easy. Building the structures that support it requires buy-in from the dean of science, the department chair, a core of senior faculty, and one’s colleagues both in the department and beyond.

Calculus Reform was not a movement that came and went. It was the opening of a multi-decade effort that only now is truly beginning to blossom.

**References**

American Statistical Association (ASA). (2016). *Guidelines for Assessment and Instruction in Statistics Education College Report 2016*. Alexandria, VA: ASA. Available at http://www.amstat.org/education/gaise.

Consortium for Mathematics and Its Applications (COMAP) and Society for Industrial and Applied Mathematics (SIAM). (2019). *Guidelines for Assessment and Instruction in Mathematical Modeling Education*, 2nd edition. Philadelphia, PA: SIAM. Available at https://www.siam.org/Portals/0/Publications/Reports/GAIMME_2ED/GAIMME-2nd-ed-final-online-viewing-color.pdf

National Research Council (NRC). (2013). *The Mathematical Sciences in 2025*. Washington, DC: The National Academies Press. https://doi.org/10.17226/15269.


A tweet from educator (and original MOOC co-creator) Stephen Downes (@oldaily) caught my eye as I was about to get down to writing this month’s post. In standard Downes fashion, the tweet took me to a short commentary on his blog with a link to the longer post he was commenting on, written by Nick Shackleton-Jones (formerly Head of Online and Informal Learning for the BBC in the UK, who as of today I follow on Twitter).

The observation in Downes’ summary that jumped out at me was this:

“Cognitive load will make some difference to problem solving but what really matters is whether or not someone cares about solving the problem.”

I sensed that what I would find in the article was a discussion having relevance to some mathematics-education ideas I have been working on for a couple of decades now. As it turned out, however, it was more than just relevant. It aligned well with an entire train of research into the notion of *information* that I and others have been pursuing since the early 1980s. That research (centered at, but not exclusive to, Stanford University in the heart of Silicon Valley) was originally motivated not by education but by the design of effective information processing technologies, and it was only later, after I served a term on the US Mathematical Sciences Education Board (MSEB), that I found myself following that research thread into the world of math ed. More on that later. Meanwhile, with my originally intended August post now pushed to the back burner, let me pursue the issue Downes’ commentary pointed me to.

Cognitive Load Theory (CLT) is an educational theory developed in the 1980s in the course of studying problem solving. It is based on the notion that people have limited mental capacity for processing information, and argues that learning experiences and materials should be designed in a way that takes account of those limitations. (In particular, it views problem solving as a form of information processing.)

The article by Shackleton-Jones provides a brief summary and history of CLT, before moving on to the thrust of the point he wants to make. He says, in a provocative way guaranteed to get my attention: “Though there is nothing fundamentally wrong with this idea, it risks distracting us from the things that really matter, when people learn.”

Ideally, I suggest you check out the S-J article (the “longer post” link above) before progressing here, but for completeness let me just remark that CLT views the brain as an organ for processing information. (That was the trigger for my earlier IT-related research; more on that later.) In particular, it assumes that reasoning and problem solving involve retrieving information from the brain’s long-term memory and storing it temporarily in (short-term) working memory, where the actual problem solving is assumed to take place. While long-term memory appears to have effectively limitless capacity, working memory is highly limited. (Hence the significance of the “cognitive load.”)

As S-J notes, there is evidence to support this (theoretical) information processing structure and the limitations on cognitive load, and it has proved to be a useful perspective from which to describe and understand purposeful mental activity, leading to concrete advice for educators.

[Whether it tells us how our brain *actually* solves problems is a different matter. None of us have access to what goes on inside our heads, and there is no reason to assume we have concepts and a language capable of describing its behavior “accurately.” All talk of information and information processing is just that: talk. What makes adoption of the *information stance* (as I, and others, call it) towards the brain useful is that it is, indeed, *useful*: it provides a way of understanding mental activity, a language to communicate about it, and a framework for creating systems and technologies that aid us in our mental work.]

For instance, CLT tells us that if an instructor displays a passage of text on a screen, it is wise to give the audience time to read it before moving on. The advantages of reading it aloud, with emphases and comments, are likely to be lost if the audience try to read the text (as they surely will) at the same time as they attempt to follow an audio narration. Working memory simply cannot give equal attention to incoming audio and visual information streams at the same time. The cognitive load is too great. One channel has to be de-emphasized. We can do that quite easily if, say, the audio channel is music, when we can generally read the text just fine while retaining just an overall awareness of the music. But trying to read and hear the same text at the same time causes overload, and in general the consequence is that neither succeeds.

See the S-J article for more. The author also provides pointers to more substantial treatments of CLT, some with a focus on mathematics instruction, in particular the relative merits of worked examples versus practice problem-solving.

In fact, let me suggest that at this stage you **really should** read the S-J article. It is clearly written and easy to follow, and hence, in an age of instant access to original sources, it would serve no purpose for me to summarize here what is already an excellent summary.

So, moving on to what for me was the punchline of S-J’s essay, he writes:

“Although Cognitive Load Theory has some worthwhile applications, it risks distracting us from more important variables affecting learning, some key concepts are poorly or completely undefined, and it is narrow in application. Overall it may lead us to focus on the presentation of the material we are teaching, rather than the learning process and the learner.”

After some discussion in which he elaborates that observation, S-J narrows in on the specific point he wants to make:

“The major objection to Cognitive Load Theory is not, therefore, that it is wrong – but that is it a distraction from more important aspects of the learning process. The cognitive effects that it describes are most applicable in a relatively narrow range of contexts created by the education system and therefore have limited applicability to real-world learning. Even worse, they risk distracting us from what is really happening when we learn.”

This is the part that in particular leapt out at me: “applicable in a relatively narrow range of contexts created *by the education system* and therefore have limited applicability to real-world learning.” [My emphasis.] Replace “real-world learning” by “real-world problem solving” in that quotation and we are squarely back in the realm I have been pursuing in some of my more recent *Devlin’s Angle* posts, where I have been advocating making the primary goal of systemic mathematics education the development of the capacity to *solve real-world problems* (using the full plethora of technological tools available today) — as opposed to mastery of a range of mathematical procedures mandated by some committee or other.

Far more important to achieving good learning, S-J argues, is to design learning experiences based on what he refers to as the Affective Context Model. (That link takes you to a very short introduction.) The name is, I believe, due to S-J, who has been advocating its use for some years. Though the ACM was new to me, and I suspect to many readers, the notion of affective learning has been knocking around the educational world for decades, going back to the introduction of Bloom’s Taxonomy in the 1950s.

Actually, what I just said is not entirely true. The *name* “affective context model” was new to me, but the concept was very familiar from the work I and others did at Stanford’s Center for the Study of Language and Information, starting in the early 1980s. [I started collaborating with CSLI in the mid-1980s, just after the work got underway, spending two years at the center from 1987 to 1989, and returning as the Executive Director in 2001.]

As I noted earlier, the motivation I and many others had at the time was not education (though a few of the researchers associated with CSLI were focused on education from the get-go), but rather understanding the notion of information in a way that could lead to the productive design of effective information technologies that people would find easy and natural to use.

We began by trying to reach an agreement as to what exactly “information” is — more precisely, what we should take it to be — with a view to crafting a definition that could support a scientific theory. The highly multidisciplinary structure of CSLI was intended to provide a research community that had a reasonable chance of succeeding in that enterprise (among other goals).

[The funding for CSLI — the founding award in 1983 was $23M, equivalent to around $60M today — came from the System Development Foundation, a non-profit spinoff from the RAND Corporation that designed and built much of the US IT infrastructure in the 1950s and 60s. Having helped build a national IT infrastructure, the folks at RAND thought it would be helpful to retrofit a scientific theory of information, in order to understand just *what* it was that IT was “processing,” and guide future IT developments.]

In 1991, I wrote a book, *Logic and Information*, that described much of that early work. It developed a mathematical model of what information is, how it arises, how it can be encoded, and how it can be transmitted. Recognizing that when people talk of information, it is almost always highly contextual, the theory was built on a grounding theory of contexts, called Situation Theory by the two researchers who initially developed it, the mathematician Jon Barwise (deceased) and the philosopher John Perry.

Though modern society tends to view information as a commodity (indeed, the theory has theoretical entities called *infons* — items of information) that can be created, transported, bought and sold, and consumed, the framework we eventually developed presents a much more complex picture of multiple, interacting contexts. Over the years following the initial development of the theory, the rubber hit the road as it started to be applied to a variety of real-world situations, among them human communication, education, manufacturing, production-line design (including automobiles), silicon-chip design, Space Station planning, and intelligence analysis. What became clear from that applied work was that the theoretical (hence naive) perspective described in *Logic and Information*, whereby contexts were treated as supporting characters in the creation and transmission of information, obscured the overwhelming fact that *contexts were the main drivers*. The *information* transmitted by any given signal depends fundamentally on the originating context at the time of issuance and the receiving context at the time of receipt. The same signal (word, sentence, email, etc.) can convey very different information under different circumstances. That massive context dependency is where the theory’s main focus has to be, both for study and for application. The mathematical notion of information we developed within Situation Theory is used only as an artifactual prop to facilitate the study and discussion of the way the pertinent contexts interact. (I wrote a subsequent book, *InfoSense*, in 1999 that was all about contexts.)

The idea of information as a commodity to be created, shipped around, and consumed, which arose in the nineteenth century with the growth of mass-produced daily newspapers (it’s not hard to see why that could have been the cause), is hopelessly inadequate in a world with today’s information technologies and high degree of instant global connectivity, where many contexts can be in play. In the final analysis, talk about “information flow” (the classic, commodity view) is just one (albeit very useful and productive) way to view the way people (or societies) act and interact.

No surprise then, given that background, that the moment I read S-J’s account of the significance for learning of *affective context*, I knew at once he was onto something. In fact, I am sure he is onto THE thing. (Actually, it’s just one “the thing”: while learning is a natural ability *Homo sapiens* acquired through evolution, education is a complicated human-created activity with many facets.) Despite coming from different backgrounds, as we do, I am sure that S-J and I agree on that. (We have yet to interact, by the way.)

So, if you have so far resisted looking at S-J’s article, let me make one last attempt to persuade you to do so. That, in fact, is the main goal of this month’s post.

Since 2003, the SIMIODE *(Systemic Initiative for Modeling Investigations and Opportunities in Differential Equations)* organization has been promoting a modeling-first approach to teaching undergraduate differential equations. With recent NSF support, they are expanding their offerings of course materials and training workshops. In this blog, I get to interview my co-PI on the project, Dr. Therese Shelton of Southwestern University.

**Where did the idea for this project come from?**

SIMIODE is a well-established community of faculty and students who are interested in using mathematical modeling to teach differential equations. SIMIODE has held multiple successful events, including a 2015 MAA workshop offered through the former Professional Enhancement Program, two successful developer workshops, and multiple events at professional meetings. The time seemed ripe to elevate and expand the efforts, and we knew with NSF funding we could reach even more people!

**NSF grants are competitive. What do you think set your proposal apart and got the project funded?**

*SIMIODE* has a clear mission to encourage and support faculty to use modeling to motivate student learning in differential equations. The mathematical-modeling approach uses best practices in STEM undergraduate teaching: it is hands-on, problem-based, inquiry-driven, and incorporates collaborative learning. We also had a clear plan to use the funding to offer developer and practitioner workshops. Developer Workshops enable faculty from around the country to create teaching materials that focus on real-world data and encourage students to transfer knowledge between mathematics and other disciplines. Practitioner Workshops combine in-person and virtual support for faculty seeking to adopt a mathematical-modeling approach in their own classrooms.

**What new innovation does this project bring?**

We support different levels of faculty engagement: some faculty will introduce a few modeling activities into their classes while others will complete a full curricular shift to mathematical models. With our vetted materials, teachers can jump right in with a new teaching style. Furthermore, we enable creative professionals to develop new, engaging materials to expand the peer-reviewed repository for others to use.

**How will STEM students be better off as a result of your project?**

Students across many STEM disciplines take differential equations, so large-scale curriculum improvements in this course can have a transformational impact on STEM graduates nationwide. Our hope is that making this course more engaging and applicable will result in better retention and preparation of STEM students to tackle real-world problems and translate their skills across disciplines.
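To give readers a concrete flavor of what a modeling-first activity can look like, here is a minimal, hypothetical sketch (not an actual SIMIODE scenario; the logistic model, growth rate, and carrying capacity are illustrative assumptions). Students might first confront population data, propose the logistic model dP/dt = rP(1 − P/K), and explore its behavior numerically with Euler's method before any analytic solution techniques are introduced:

```python
# Hypothetical modeling-first sketch (illustrative parameters, not a
# SIMIODE scenario): simulate logistic population growth numerically.
r, K = 0.5, 100.0       # assumed growth rate and carrying capacity
P, t, dt = 5.0, 0.0, 0.1  # initial population, time, Euler step size

while t < 20:
    # Euler step for dP/dt = r * P * (1 - P / K)
    P += dt * r * P * (1 - P / K)
    t += dt

print(round(P, 1))  # population has leveled off near the carrying capacity K
```

Working from a simulation like this, students can discover the carrying-capacity behavior for themselves, then compare the numerical result against the analytic solution once it is derived, which is the kind of knowledge transfer between modeling and technique the project aims for.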

**What have you learned so far in this project? What’s the biggest adjustment you’ve had to make?**

We have learned that there is a desire to incorporate more realistic applications into mathematics courses, particularly differential equations. Faculty are interested in using modeling scenarios to motivate the mathematics before introducing the technical machinery, and they are also willing to try existing projects and design their own. We are making adjustments in how we encourage new developers to submit their materials so others can use them, and in how we continue to build a strong community where faculty engage with and comment on the materials.

**Tell us about the people involved in the project.**

The project is the brainchild of Brian Winkel, founder of SIMIODE, whose enthusiasm is contagious. Winkel assembled a diverse team from across the country who complement each other as co-PIs: R. Corban Harwood from George Fox University in Newberg, OR; Audrey Malagon from Virginia Wesleyan University, Virginia Beach, VA; Patrice Tiffany from Manhattan College, Riverdale NY; and myself, Therese Shelton from Southwestern University in Georgetown, TX. In addition, there are numerous others affiliated with the grant implementation and assessment, and there are many who have already benefited from their participation in grant activities.

**Tell us about someone impacted by the project.**

Eric Stachura is enthusiastic about the impact of the MINDE 2018 workshop and the SIMIODE materials:

“When I was first exposed to SIMIODE materials, I was slightly reluctant about using them, and for me the Spring 2018 differential equations course was somehow a test for me. Having seen how wonderfully the problems were received, though, and then having attended the MINDE workshop in Summer 2018, I am completely convinced by the value of this community. My goal is to stay involved as much as I can, develop new scenarios for the community and to use in my own courses.”

Learn more about NSF DUE 1724796

Full Project Name: Building Community Through Systemic Initiative for Modeling Investigations and Opportunities with Differential Equations (SIMIODE)

Abstract: https://nsf.gov/awardsearch/showAward?AWD_ID=1724796

Project Website: www.simiode.org

Project Contact: Brian Winkel, PI, director@simiode.org

*For more information on any of these programs, follow the links, and follow these **blog posts**. This blog is a project of the Mathematical Association of America, produced with financial support of NSF DUE Grant #1626337.*

*Audrey Malagon is lead editor of DUE Point and a Batten Associate Professor of Mathematics at Virginia Wesleyan University with research interests in inquiry-based and active learning, election security, and Lie algebras. Find her on Twitter **@malagonmath**.*

Hello, world! We are Omayra Ortega, Anisah Nu’Man, Haydee Lindo, Jamylle Carter, Jacqueline Brannon-Giles, and Leona Harris; and we are the members of the new NAM editorial board for MAA Math Values! Watch this space for posts on teaching ideas, opportunities, current events, and cool math concepts. While you may not hear everyone’s voice in each post, all six of us contribute to each blog post in some way. What’s more, as women of color with degrees in mathematics, *we’re rare birds*! We reside in California, Texas, Georgia, Massachusetts, and Washington DC, and are in various stages of our careers. We are a geographically, politically, mathematically, religiously, and culturally diverse group of women, who are all members of the National Association of Mathematicians (NAM). We plan to leverage the diversity of personal experience in this group to contribute thought-provoking posts to the MAA Math Values Blog.

We are currently looking forward to MAA MathFest, where there are many wonderful NAM sessions planned. MAA MathFest, which began in 1997, was inspired by an earlier NAM event of the same name, MATHFest, which began just six years earlier, so it makes sense that MAA and NAM would continue to collaborate on these events, this blog, and many other activities!

One highlight of this year’s MAA MathFest will be the 25th annual David Harold Blackwell Invited Address, *Dudeney's No Three-In-Line Problem: Problem, Solutions, Conditions, Progress, and Conjectures*, which will be given by Dr. Johnny Houston on Aug. 2 at 4 p.m. in Ballroom A of the Convention Center. This is a historically momentous year to attend the invited address because April 24, 2019 was David Blackwell’s 100th birthday! Dr. Blackwell had to break through many barriers to study, teach, and do research in mathematics. Despite the discrimination he experienced, Dr. Blackwell went on to win the John von Neumann Theory Prize in 1979 and was the first African-American to be inducted into the National Academy of Sciences.

David Blackwell

NAM is organizing several special sessions at MAA MathFest in Cincinnati, OH, to honor this great man’s legacy, as well as the 50th anniversary of the National Association of Mathematicians. Through his contributions to research and service to the profession, Dr. Blackwell had a lasting impact which is still palpable today. Dr. Blackwell began his career teaching at Southern University, Clark College (now Clark Atlanta University), and served as the chair of the Department of Mathematics at Howard University. He then joined the faculty at the University of California, Berkeley where he was a beloved professor for 34 years and retired in 1988. During his career, Dr. Blackwell served as an officer with multiple professional organizations and supervised over sixty PhD students. He passed away at the age of 91. Dr. Blackwell will be celebrated at MAA MathFest with the following lectures and events:

**Dr. David Blackwell Centennial Celebration Events at the MAA MathFest**

**Thursday, Aug. 1, 2019, DECC Room 232**

1:30-1:50 p.m., Dr. Ronald E. Mickens (Clark Atlanta University), *The Alternative Universes of David Blackwell and William Claytor*

1:50-2:10 p.m., Asamoah Nkwanta (Morgan State University), *Game Theory: A Survey of an Intriguing Contribution of David Blackwell*

2:10-2:30 p.m., Mark E. Lewis (Cornell University), *Blackwell’s Contribution to Dynamic Programming*

2:30-2:50 p.m., Kimberly S. Weems (North Carolina Central University), *David Blackwell: Bayesian Statistics and Contributions to the Statistics Community*

2:50-3:10 p.m., Carlos Castillo-Chavez (Arizona State University/Brown University), *Blackwell-Tapia 2008-2018*

3:10-3:30 p.m., Richard Tapia (Rice University), *Behind the Scenes: the David Blackwell that I Knew*

In addition, keep an eye out for more information on the NAM-MAA Blackwell Birthday Party on Friday, Aug. 2, 2019, at 5 p.m.

We are also looking forward to some other intriguing invited addresses at the MAA MathFest including Dr. Ami Radunskaya’s MAA Invited Lecture *Uncertainty: The Mathematics We Don’t Know* on Aug. 1 at 9 a.m., Dr. Mohamed Omar’s Lecture *Secrets of Grad School Success* on Aug. 1 at 1:30 p.m., and Dr. Rochelle Gutierrez’s MAA James R.C. Leitzel Lecture *What’s at Stake in Rehumanizing Mathematics?* on Aug. 3 at 9 a.m.

MAA MathFest is shaping up to be a great event this year, with so many of the invited lectures, sessions, and workshops planned with an eye towards #inclusivity, #community, #communication, and #teachingandlearning. These are the main themes of the MAA Math Values Blog and they point to the huge mental shift that has begun to address diversity, equity, and inclusion within the mathematical sciences. While great strides are being made, there is still more work to be done. This is why we are highlighting a snapshot of sessions and lectures that are either presented by speakers from varied backgrounds or are on topics related to diversity. We do this to help foster the growth we want to see within our community.

This year’s MAA MathFest also includes a Contributed Paper Session entitled *Diversity, Equity, and Inclusion in Mathematics* which will address methods to engage diverse student populations, in particular, underrepresented minority, first-generation, low-income, and female students. Below are a few talks from the session you might find interesting. See the program for the names of the speakers and their institutions.

**Contributed Paper Session: Diversity, Equity, and Inclusion in Mathematics**

*Beyond Leaky Pipes: Fostering Pathways and Persistence in the Mathematical Sciences*

*Change Is a Thing You Can Count On: Adjusting to Meet Diverse Student Needs*

*Diversifying and Humanizing Mathematics through Community Collaboration*

*Inclusive Teaching and Learning of Mathematics in an Afterschool Math Enrichment Program for Underrepresented Minority, First-Generation, Low-Income Students*

*Recruitment, Resilience, and Reaching Higher via Early Research Experiences*

*The NREUP and Howard’s Program*

*Supporting the Transition to Undergraduate Mathematics: Collaborative Learning and Mentoring in Teams*

You can find a complete listing of talks and presenters in the MAA MathFest program online.

We hope you enjoyed our post, and be sure to stop by next month for a new post on mathematics.

*Dr. Anisah Nu’Man is an assistant professor of mathematics at Spelman College. Originally from Atlanta, GA, Dr. Nu’Man obtained her Ph.D. from the University of Nebraska-Lincoln. Her research interests lie in geometric group theory.*

*Dr. Omayra Ortega is an assistant professor of applied mathematics and statistics at Sonoma State University and serves as the chair of the publications and publicity committee for the National Association of Mathematicians. She uses tools from statistics, mathematics, public health, and epidemiology to tackle emerging health issues through her teaching and scholarship. Dr. Ortega is deeply committed to broadening the participation of underrepresented minorities in STEM and mentoring students through the challenges of academia.*

Image by Nick Koberstein

In my job as a mathematics educator, I have had the good fortune of working with high school students and teachers in various capacities. Most recently I have been teaching Precalculus and Calculus courses. As the school year winds down, I ask my students to look back over the entire year and reflect on their growth as learners and doers of mathematics. They often write about how much they value their time in class working collaboratively with their classmates, learning from diverse perspectives or novel approaches to problems.

One of my Precalculus students this year wrote, “Group work is extremely helpful. I like that we can reason through problems together and come to a conclusion, especially when we’re trying to figure out a new type of problem. It’s a lot easier when I’m hearing other people’s thoughts because we can build off of each other’s ideas in order to figure out how to solve a problem. I think these different methods of learning are all very effective in helping me learn and remember things we do in class.”

Inevitably, students write about how much they appreciate the opportunity to work on problems that involve some hands-on component or a problem that connects the math they are learning to the “real world”.

Here are a couple of testimonies from my Precalculus and Calculus students:

“I enjoy that optimization, and other modeling problems focused on real world issues. Instead of having an unrealistic problem, I can use an example that relates to something I have actually experienced, which therefore allows me to apply that same method of problem solving in the future.”

Image by Nick Koberstein

“I think problems like the hockey problem are my favorite in the classroom, because it requires us to be ‘math investigators’ where we need to solve the problem ourselves. There is no textbook telling us if we are on the right track. Sometimes homework problems and textbook problems can feel artificial, like there is a certain skill of integration or finding derivatives that it wants us to master. Application problems serve as opportunities for us to use a skill in a way I find quite fun. I love going to the board with other students and trying stuff out.”

Students also describe how the struggle they experience when faced with a challenging problem leads to a great sense of accomplishment when they finally solve it.

One student wrote, “Math for me was mostly focused on doing what the teacher wanted. Doing the first problem here opened my eyes to real math problems. While trying to solve the problem, I became thoroughly confused and frustrated. Solving it with my group felt great since it felt like we accomplished something.”

The “good” stuff I am referring to in the title of this blog post are those opportunities that I offer my students – the challenging modeling problem or the chance for students to explore their own ideas and share their solutions in their own words.

The valued experiences described in these students’ reflections are what I will strive to hold onto throughout the following school year as I feel pressed for time and am tempted to skip the “good” stuff. When we hit late March or early April, my heart starts racing, and I can feel the anxiety creep in as I realize that we are running out of time. That’s when I am most tempted to abandon student-centered teaching strategies that can offer students the chance to lead the way or be the mathematical authority in the classroom.

I think to myself, “Won’t it be better if I just revert to the old ways of teaching where I show students a procedure, have them practice the procedure and then assign 10 problems for homework? Just for today or this week, so I can cover more material and catch up.” That’s when I have to take a step back and re-read a previous student’s end-of-year reflection to remind myself that I shouldn’t skip the “good” stuff. I have to push aside the anxiety and focus on what I hope my students will remember about the course down the road or what my colleague, Dan Teague, refers to as the “residue” of our mathematics courses.

So as another school year ends, I appreciate the time to reflect on my practice and remember that including the “good” stuff is not done in lieu of covering mathematical content. My hope is that students may have built a deeper understanding of that content through their engagement in the “good” stuff and that the specific mathematical content might be included in the “residue” for the students in my courses.

In my work with teachers across the country, I have learned that making time for the “good” stuff is particularly challenging when teachers and students are subject to the pressures of end-of-course tests and final exams scores.

In my next blog post, I will share lessons learned on how these teachers have found ways to make incremental changes in their practice as they work to infuse math modeling into their courses and incorporate student-centered teaching strategies in their classrooms.

For examples of math modeling problems appropriate for K-16 students and support on how to implement these in your classes, please refer to The Guidelines for Assessment and Instruction in Mathematical Modeling Education (GAIMME) Report.

I am reading a book, Robin DiAngelo’s *White Fragility*, that carefully discusses the complex and nuanced relationships between white privilege, prejudice, and racism. DiAngelo also notes the role of ideology in preserving existing social ideals and structures, such as individualism and consumerism, and how hard it is for someone raised inside a culture, where seemingly shared values are transmitted through educational, cultural, and civic systems, to escape accepting its norms. In another book, *What Does It Mean to be White?*, DiAngelo notes the role that allocating public school resources largely on the basis of property taxes plays in maintaining the status quo of white privilege.

A recent article in the New York Times which reconsiders the single-home ideal in some U.S. cities reminded me of this last point. While many cities suffer from a lack of adequate and affordable housing, zoning laws across the nation severely limit the addition of multi-family units in substantial portions of metropolitan areas facing these shortages. My view, reinforced by the article, is that such laws exist largely to maintain the social, economic, and political power of those who have managed to obtain homes in the protected areas. In particular, as DiAngelo notes in her work, there is a strong connection between residential patterns, school districting, and the resources ultimately available to the schools, as well as the opportunities afforded to the students attending them.

Perhaps more importantly, both DiAngelo’s work and the article help bring attention to how hard it is for those who benefit most from existing structures to recognize the harm those structures may cause large segments of our fellow citizens, and accept the need to change. This is especially true when that change will likely impact them in ways that diminish their privilege even if ultimately that change holds promise to strengthen and enrich the society in which they live, thus improving their quality of life, too!

This brings me to my topic, a new book published by the MAA and the AMS, *Living Proof: Stories of Resilience Along the Mathematical Journey*. It is available for free download, and a complimentary copy of the book will be sent to math departments across the country early this fall. This book, a collection of short essays by mathematicians from diverse backgrounds, is intended to inspire others to persevere and succeed in the face of difficulties. The title offers a diverse collection of painful stories of individuals who managed to succeed in spite of institutional barriers and hostile or thoughtless treatment by fellow students and faculty. Many of these stories reflect the varied experiences the authors had as they progressed through schools with fewer resources than many of us from more privileged backgrounds are accustomed to.

Perhaps more immediately important for those of us who make our careers in higher education, *Living Proof* makes it clear that the structure of higher education, and the expectations and norms of our profession, often present barriers for students who have great potential to succeed in and contribute to the mathematical sciences enterprise.

As the editors write in the preface,

This project grew out of conversations with students about the difficulties inherent in the study of mathematics. Many undergraduates have not yet learned to embrace the ups and downs that each of us faces as we make our way through the discipline [...] there are insecurities about their own abilities, uncertainty about whether they have made the correct choice for a major, and a myriad of other emotions. And these are just the things that rise to the surface. For many students, there are also stereotypes and identity issues that influence their attitudes toward the discipline.

Math should be difficult, as should any worthwhile endeavor. But it should not be crippling. The ability to succeed in a mathematical program should not be hindered by a person’s gender, race, sexuality, upbringing, culture, socio-economic status, educational background, or any other attribute. Our primary goal in collecting this volume of essays is to push the conversation forward.

Those featured in *Living Proof* include three MAA past presidents and one future MAA president; the current Associate Secretary of the MAA; the current editors of *MAA FOCUS* and *The College Mathematics Journal*; two prior members of the MAA Board of Directors; and others who are active in a variety of other leadership roles in the MAA. Many are in senior positions at academic institutions. In other words, those featured in the book are quite accomplished, and are now in leadership positions in the system with which they struggled.

This makes their call for change, and their willingness to embrace their roles as mentors and role models for future professionals, all the more powerful.

I am pleased to note how, collectively, the volume speaks to the critical importance of MAA’s core values -- community, inclusivity, communication, and teaching and learning. All of those featured in *Living Proof* have shared their own vulnerabilities and experiences of not belonging. I share their hope that this willingness to again make themselves vulnerable will serve as a step towards our profession’s reckoning with the challenges we still face in embracing the full human and mathematical potential of our students, colleagues, and fellow citizens, all of whom rely on mathematics in ways large and small as they move through their lives.

I invite you to download and explore *Living Proof*. You will almost certainly find stories that resonate with you, some more and some less. But I also ask that you look beyond what makes sense to you personally, and make an effort to view the experiences the authors describe through their eyes. For it is only through understanding and empathy that we will make progress towards achieving MAA’s vision of a society that values the power and beauty of mathematics and fully realizes its potential to promote human flourishing.

One of the interesting things about working at the MAA in Washington, DC is that people talk politics and policy. Honestly, I find it hard to keep up or even to connect the names of people with their jobs. But I like being more politically aware and thinking about connections between policy and our MAA work to advance the understanding of mathematics and its impact on the world.

I don’t often think of social media as the best source of political information. So I was delighted when I saw a post from mathematician Dr. Jeanne Clelland discussing the recent Supreme Court decision on gerrymandering. I had enjoyed a plenary talk this spring by Jeanne at this year’s MAA Rocky Mountain Section meeting. After seeing the post, I asked Jeanne if we could reprint it here and she said yes!

The original gerrymander. In 1812, Elbridge Gerry, the governor of Massachusetts, signed a bill that redistricted Massachusetts to benefit the Democratic-Republican party. The Boston Gazette coined the term "Gerry-mander" for a salamander-shaped district in the Boston area. Image Source: https://upload.wikimedia.org/wikipedia/commons/9/96/The_Gerry-Mander_Edit.png

**Dr. Jeanne Clelland’s post**

After reading the entire text of the recent Supreme Court ruling on gerrymandering and commenting briefly on a friend’s Facebook post, my friend asked me to write a longer, public post with more details. What follows here is (with minor editing) the resulting post, written quickly and late at night after a long flight. I was very pleasantly surprised when Ray asked me if she could post it here on Math Values; I never imagined it might reach such a wide audience!

First, a bit of historical perspective: Over the last several decades, the Supreme Court has ruled many times on racial gerrymandering cases under the auspices of the Voting Rights Act. As is often the case, it’s not so easy to articulate a precise standard for what does and does not constitute illegal racial gerrymandering, but the courts have gradually worked it out through a series of cases, and this process is, of course, ongoing. Meanwhile, in cases involving partisan gerrymandering, the Supreme Court has repeatedly indicated that it considered such claims justiciable, but due to the lack of a clear and manageable standard for measuring it, the Court has never actually declared any particular map to be unconstitutional on the grounds of excessive partisan gerrymandering. Justice Kennedy, in particular, clearly indicated that he would very much like for someone to come up with such a standard, but then he retired last year before identifying a standard that he found satisfactory.

Enter the mathematicians! Just in the last few years, several groups have pioneered a strategy for quantifying gerrymandering based on statistical sampling and outlier analysis. The basic idea is to take all the rules that a particular state requires a districting plan to satisfy - e.g., districts must be contiguous, relatively compact (whatever that means), they should try not to divide communities of interest, etc. - and have a computer draw a large random sample (an “ensemble”) of districting plans that satisfy all the necessary criteria. Then take precinct-level voting data from recent elections, and for each plan the computer drew, compute the number of seats that each party would have won with that plan and the actual voting data. This typically results in a bell curve describing how many seats one might expect each party to win under a politically neutral plan. If a particular plan - say, one drawn by a highly partisan state legislature - yields a result that’s way out on the tail of the curve, that’s pretty strong evidence of gerrymandering. For North Carolina in particular, much of this analysis was carried out by Jonathan Mattingly’s research group at Duke; their work is described in detail here, with lots of graphs to illustrate their analysis: https://arxiv.org/abs/1801.03783
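The outlier logic itself is simple enough to sketch in a few lines. The following toy Python illustration is invented for this post: real analyses (such as Mattingly’s group’s) draw actual districting plans subject to a state’s legal criteria and score them with real precinct-level votes, whereas here each "plan" is just a list of randomly generated district vote shares.

```python
import random

# Toy sketch of ensemble outlier analysis. All numbers here are
# invented for illustration; real studies use sampled legal maps
# and actual election returns.

random.seed(42)
NUM_DISTRICTS = 13  # size of North Carolina's congressional delegation

def seats_won(district_vote_shares):
    """Count the districts in which party A's vote share exceeds 50%."""
    return sum(1 for share in district_vote_shares if share > 0.5)

# Draw a large ensemble of hypothetical neutral plans. Each district's
# party-A vote share is sampled around a 50/50 statewide split.
ensemble = [
    seats_won([random.gauss(0.5, 0.08) for _ in range(NUM_DISTRICTS)])
    for _ in range(10_000)
]

def outlier_percentile(enacted_seats):
    """Fraction of ensemble plans yielding at most this many seats."""
    return sum(1 for s in ensemble if s <= enacted_seats) / len(ensemble)

# A plan in which party A wins 11 of 13 seats sits far out on the tail
# of this ensemble's bell curve -- under these toy assumptions, strong
# evidence of an outlier.
print(outlier_percentile(11))
```

The real work is in sampling the ensemble (typically with Markov chain methods respecting contiguity, compactness, and the rest); the bell curve and the tail comparison then proceed exactly as above.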

What’s really powerful about this approach is that it takes the political geography of a state into account in a way that simpler measures (like, say, the much-touted efficiency gap) cannot, and it provides a more nuanced picture of what’s actually reasonable to expect. The mean of the ensemble may or may not reflect proportional representation; indeed, it often does not. A particularly striking example is Massachusetts, where Republicans consistently get 30-40% of the statewide vote in Congressional elections, but no Republican has been elected to Congress since 1994. A recent paper by Moon Duchin’s group, available at https://arxiv.org/abs/1810.09051, shows that this is not due to gerrymandering, but rather to the fact that Republicans are simply too spread out throughout the state, to such an extent that it is, in fact, mathematically IMPOSSIBLE to draw a majority-Republican Congressional district no matter how you draw the lines. A simple measure like the efficiency gap would flag Massachusetts as an egregious gerrymander, but ensemble analysis shows that this outcome is simply a consequence of how Republicans are geographically distributed throughout the state.
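For comparison, the efficiency gap is a single summary statistic. Writing $W_A$ and $W_B$ for the two parties’ “wasted” votes (votes cast for losing candidates, plus winners’ votes beyond the share needed to win) and $T$ for the total votes cast, it is

```latex
\mathrm{EG} = \frac{W_A - W_B}{T}.
```

A large value flags asymmetry in wasted votes, but, as the Massachusetts example shows, it carries no information about what the state’s political geography actually makes possible.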

In this week’s decision, Justice Roberts basically threw up his hands and declared that the search for a manageable judicial standard for measuring gerrymandering is hopeless, and therefore such claims will no longer be considered justiciable in federal courts. He seemed not to clearly understand the mathematical argument; he repeatedly referred to the proposed outlier analysis as attempting to measure deviation from proportional representation, which it absolutely does NOT do. More significantly, he opined that it was not the Court’s business to decide how much deviation was permissible, and that therefore the entire question should be left up to the states and to Congress.

In Justice Kagan’s scathing dissent, on the other hand, she made it clear that she understands the math and believes that it could and should form the basis for a judicial standard. She did not attempt to set a clear threshold for how much deviation from the mean should be permissible, but she thinks that in the North Carolina and Maryland cases at hand - both of which are way out on the tails of their respective bell curves - the Court should say, “This much is DEFINITELY too much.” Then it would be up to future litigation, legislation, etc. to work out the question of where to set limits on how much deviation from the mean is permissible, similar to the process that has played out for racial gerrymandering.

The upshot is basically that the status quo will remain, at least for now. The decision explicitly cedes the power to regulate gerrymandering to the states and to Congress - so the Court will not, for instance, strike down the initiatives that have passed in some states to create independent redistricting commissions. I would like to think that Justice Kagan’s dissent might provide some states, and maybe even (some future) Congress, with a template for legislation that might put some sensible limits on partisan gerrymandering - and meanwhile, we mathematicians will continue to work on developing and improving these methods so that we can contribute to the conversation at any and all levels!

*Want to learn more? Check out the Metric Geometry and Gerrymandering Group at mggg.org. MGGG is led by Moon Duchin of Tufts University and Justin Solomon of MIT.*

*Jeanne Clelland is a Professor of Mathematics at the University of Colorado, Boulder. She is a signatory to the Amicus Brief of Mathematicians, et al., that Justice Kagan cited in her dissent.*

A paper written by Dr. Man Keung Siu, a professor of math at the University of Hong Kong, titled “Does Society need IMO medalists?” poses an important question: how do math competitions fit in with the field of math at large? He starts by describing the International Mathematical Olympiad (IMO), including his experience with the competition, and mentions names of IMO winners who have gone on to become famous mathematicians. He also ponders the relevance of the IMO winners to society and considers specifically the impact of the IMO winners on the field of math.

The 2018 US team at the International Mathematical Olympiad.

Winners of the IMO are highly trained in math competitions. As a high school student who competed in math competitions, I read Dr. Siu’s paper with interest and believe he asks an important question. Dr. Siu discusses the training and skill sets gained through preparation for the competitions. He states that training for math competitions allows students to acquire logical thinking, confidence, and “academic sincerity”.

However, he also notes that some of the drawbacks of this type of training include the ways competition problems differ from mathematical research, the potential for overtraining, and the possibility that competitive spirit is sometimes different from passion for the subject.

Personally, I believe that the points Dr. Siu brought up are fair, but based on my own experience and observations of other competitors, it seems that in order to do well in math competitions a genuine passion and drive for the subject is necessary to keep one motivated.

I agree with the point in Dr. Siu’s paper from Dr. Petar Kenderov, a math professor at the Bulgarian Academy of Sciences, who points out that math competitions disfavor students who work “slower”, as most competitions involve time pressure. Time restrictions can inhibit a student’s performance.

Kenderov also says that math competitions miss out on a fundamental aspect of math, which is posing questions and problems. Dr. Siu goes on to say something similar by mentioning that research isn’t just about the answer, but about exploring a concept in full depth.

One interesting aspect of the article is that Dr. Siu gives three examples of math problems that are solved in two different ways, to generalize that there are two fundamental methods of solving math problems. One method is the standard, longer approach of systematically solving the problem, which is typically taught in school and classroom settings, and the other is a method of finding clever ways to solve the problem in a nonconventional way, which he believes is the methodology taught in math competitions.

He states that both are crucial to the subject of math, but notes that schools don’t typically teach students math using math competition problems, which narrows most students’ horizons into thinking of math in one way. He claims that for a deeper understanding of the subject of math, all aspects should be explored. I strongly agree with Dr. Siu’s statement as the education system traditionally focuses on math solely in a way that is procedural, and it is crucial for students to see math in all facets.

Dr. Siu makes a fair point that math competitions lack some aspects fundamental to mathematical research, like posing unique questions. However, it’s nearly impossible for tests to include all the components that would provide students with a sufficient background in research. Although math competitions don’t comprehensively provide students the background needed for a research project, they do allow students to experience creativity in math that most wouldn’t get exposure to otherwise. That creativity is a crucial component of mathematical research.

At the end of the paper, he shifts the focus to say that “society needs friends of mathematics.” By this, he means that it is essential to have people in the world who don’t necessarily pursue math but understand the significance of mathematics to the world at large. He claims that there is not yet a substantial number of people who not only support the field but also comprehend the value of mathematics. I think Dr. Siu makes a phenomenal point at the end, as lack of support undermines the value of mathematical breakthroughs because people aren’t cognizant of math as a field that impacts society.

Figure 1. Cover to *Calculus Reordered*. Available at Princeton University Press.

**By David Bressoud ****@dbressoud**

This month, Princeton University Press will publish my new book, *Calculus Reordered: A History of the Big Ideas* (Figure 1). At one level, this is a history of calculus, a successor to such works as Boyer’s *The History of Calculus and its Conceptual Development*, Toeplitz’s *The Calculus: A Genetic Approach*, and Edwards’ *The Historical Development of the Calculus*. But this book was also written to encourage those who teach calculus to rethink how we approach this pivotal subject.

Those who regularly read my columns and articles will know that I have long railed against the way we teach integration. Whatever we may say, whatever we may intend, the message that students retain from the way we teach it is that integration is all about reversing differentiation. If it has applications, then those rely on obscure procedures that must be memorized individually. They miss the fundamental fact that integration is all about problems of accumulation. The connection to differentiation via the *Fundamental Theorem of Integral Calculus* (to give it its original and proper name) is not the essence of integration but merely a tool that can be applied in the last step of solving an accumulation problem.

To make this point, I started to write a book on the history of this fundamental theorem. In the process, I realized that the historical development of calculus illuminates more than how to teach integration. It also has important lessons for how we should approach differentiation, series, and limits. I have long embraced the belief that every course should be built around a story, a quest to answer certain burning questions. In writing this book, I sought to unearth the questions that drove the historical development of calculus. I found that this historical lens supported many of the pedagogical innovations promoted by researchers in undergraduate mathematics education: Pat Thompson’s decision to begin the study of calculus with problems of accumulation, Mike Oehrtman’s explanations of limits in terms of bounds on the output, and Marilyn Carlson’s emphasis on the co-variation of variables that actually vary.

So this is a history of calculus, but embedded within it and called out explicitly in the appendix is a cry to recast how we teach this subject. The book is built around the four big topics of single variable calculus, taken in historical order: integration as accumulation, differentiation as ratios of change, series as limits of sequences, and limits as the algebra of inequalities.

I believe that it is a serious mistake to start calculus with a discussion of limits, commonly expressed in phrases such as “the limit of *f* as *x* approaches *c* is *L*.” Implicit in the words “as *x* approaches *c*” is the assumption that *x* cannot equal *c*. Mathematicians know that we do not literally mean “approaching,” but students assume that when someone says, “the dog approaches the door,” they have ruled out the possibility that the dog has been sitting at the door. More than this, the phrase suggests that the focus should be on the behavior of *x* and its effect on *f*. In fact, the mathematical definition of the limit begins with the problem of bounding *f* and asks what has to be done to *x* to accomplish the desired bounds. The solution is not to start with talk of limits, but of bounds, and to save a serious discussion of limits for the end of the course, in line with the historical order, in which the work on limits came long after mathematicians had wrestled with integration, differentiation, and series.
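The standard definition, written out, makes this order of operations explicit: it begins with a demanded bound on the output of *f* and only then asks for a constraint on *x*:

```latex
\lim_{x \to c} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 :\;
0 < |x - c| < \delta \;\Longrightarrow\; |f(x) - L| < \varepsilon .
```

Given any tolerance $\varepsilon$ on the output, one must produce a restriction $\delta$ on the input; nothing in the definition actually “approaches.”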

I also do not like the way we treat differentiation, as a means of determining the slope of a tangent line. Slope is a slippery concept for undergraduates. As I’ve written, for most students it is simply a numerical value that indicates steepness (see The Derivative is Not the Slope of the Tangent Line, Launchings, November 2018). Few undergraduates recognize it as a ratio of changes. A better approach is to start with a historically rooted discussion of ratios of changes. I explain in my book that the first function to be differentiated was the sine; this occurred around the year 500 CE in India, and it was done to aid in the interpolation of tables of sine values. Aryabhata needed to know how small changes in arc length were reflected in small changes in the sine. Shortly after 1600, Napier was led to the discovery of the derivative of the natural logarithm while exploring his invented function that turned multiplication into addition.
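In modern notation (certainly not Aryabhata’s), the interpolation problem described above amounts to the observation that a small change in arc length produces a change in the sine proportional to the cosine:

```latex
% A small change \Delta\theta in arc length changes the sine by
% approximately \cos\theta \cdot \Delta\theta, i.e. the derivative of sine.
\sin(\theta + \Delta\theta) - \sin\theta \;\approx\; \cos\theta\,\Delta\theta,
\qquad \text{equivalently} \qquad \frac{d}{d\theta}\sin\theta = \cos\theta
```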

Integration is not about area under curves but about accumulation problems. The paradigmatic example of accumulation is the determination of distance traveled from knowledge of velocity, a problem that arguably was first tackled by the Babylonians almost 2500 years ago (Ossendrijver, 2018), but which was a major topic of scholarly investigation in the 14th century. Starting calculus with problems of accumulation is the approach taken by Thompson, Milner, and Ashbrook in Project DIRACC, about which I wrote in Re-imagining the Calculus Curriculum I (Launchings, May 2017) and Re-imagining the Calculus Curriculum II (Launchings, June 2017). In my book, I strongly recommend it as a better way to introduce calculus.
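As a minimal computational sketch of the accumulation idea (my illustration, in Python; it is not part of the book or of Project DIRACC): distance traveled is recovered from velocity by summing up small increments of velocity × elapsed time, which is exactly what the integral formalizes.

```python
def accumulate(v, a, b, n=1000):
    """Approximate the distance traveled between times a and b by
    summing velocity * (small time step) over n subintervals
    (a midpoint Riemann sum)."""
    dt = (b - a) / n
    total = 0.0
    for i in range(n):
        t_mid = a + (i + 0.5) * dt   # midpoint of the i-th subinterval
        total += v(t_mid) * dt       # distance increment ≈ velocity × time
    return total

# Example: velocity v(t) = 2t (constant acceleration).
# The accumulated distance on [0, 3] is the integral of 2t, namely 9.
distance = accumulate(lambda t: 2 * t, 0, 3)
```

For linear velocity the midpoint sum happens to be exact; in general, the approximation improves as `n` grows, and the limiting value is the definite integral.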

Infinite series are too often viewed as sums with a LOT of terms, a topic tacked onto the end of the first year of calculus and dominated by rules for determining convergence. For most students, these rules possess little cohesion and present nothing more than another set of procedures to be memorized. It is unclear to me what this accomplishes. How much better to focus on Taylor polynomials. Rather than convergence tests, I would like that time spent explaining how the Lagrange error bound arises as an elegant application of the Mean Value Theorem. After all, convergence as such is less useful than the ability to bound the error when the Taylor polynomial is substituted for the actual function.
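For reference, the Lagrange error bound in question can be stated as follows (standard modern form, my addition): if the (n+1)-st derivative of *f* is bounded by *M* between the center *a* and the point *x*, then the degree-*n* Taylor polynomial *T<sub>n</sub>* satisfies

```latex
% Lagrange error bound for the degree-n Taylor polynomial T_n centered at a,
% assuming |f^{(n+1)}(t)| \le M for all t between a and x.
\left| f(x) - T_n(x) \right| \;\le\; \frac{M\,|x - a|^{n+1}}{(n+1)!}
```

This is exactly the kind of statement that lets students control the error made when a Taylor polynomial replaces the actual function.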

I hope that others will enjoy reading this book as much as I enjoyed writing it and find it a source of insights and explanations that can be carried into the classroom.

Read Bressoud’s Launchings archive.

**References**

Boyer, C.B. (1959). *The History of the Calculus and Its Conceptual Development*. New York, NY: Dover.

Edwards, C.H., Jr. (1979). *The Historical Development of the Calculus*. New York, NY: Springer-Verlag.

Ossendrijver, M. (2018). Bisecting the trapezoid: tracing the origins of a Babylonian computation of Jupiter’s motion. *Archive for History of Exact Sciences*, 72, 145–189. https://doi.org/10.1007/s00407-018-0204-4

Toeplitz, O. (2007). *The Calculus: A Genetic Approach*. Chicago, IL: University of Chicago Press.

Last month I described my own half-century career in mathematics: the first twenty-five years as a pure mathematician, then, in the second half, working on problems for industry and various branches of the US Department of Defense.

I observed that at no stage did I make significant use of any of the specific techniques I learned in my entire educational arc, from elementary school through to receiving my bachelor’s degree in mathematics. Noting that my experience was not unusual—in fact, it is the norm among professional mathematicians—I started to set the stage for a promised discussion (it’s coming up below!) as to what topics should be covered in high school mathematics. Here is what I said:

“First, *what* is taught is not, in itself, of any significance. The chances of anyone who finds they need to make use of mathematics at some point in their life or career being able to use any specific high school curriculum method is close to zero. In fact, by the time a student today graduates from university, the mathematics they may find themselves having to use may well have not been developed when they were at school. Such is the pace of change today.

Second, what is crucial to effective math learning is what is sometimes called “deep learning”; the ability to think fluidly and creatively, adapting definitions and techniques already mastered, reasoning by analogy with reasoning that has worked (or not worked) on similar problems in the past, and combining (in a creative fashion) known approaches to a novel situation.”

The question for mathematics educators is, how best do we develop that way of thinking, and the understanding it depends upon?

For K-12 mathematics teachers, that’s a question only those alive today have had to face. Until the early 1990s, when we acquired readily available technologies to execute ANY mathematical procedure, the first order of the day in the school math class was to drill students in enough procedural skills to be able to get by in life and maybe get a leg up onto the mathematics ladder. If you did not have good mastery of basic arithmetic, you could find yourself disadvantaged in everyday life, and if you did not master arithmetic and some basic algebra (in particular), you could not get off the ground in mathematics. The sooner such mastery could be achieved, the better for all.

For a minority of students, the “drill the basics” approach worked. I was one of them. I became a professional mathematician. To do that, I had to make one critical transition. I had to learn how to go beyond my dependency on all those basic skills and develop the ability for mathematical thinking.

This was in the mid-1960s, long before procedural math tools like Wolfram Alpha came onto the scene. Electronic calculators were just coming onto the market, so I no longer had a need for my arithmetic skills, but by then the math I faced was well beyond arithmetic, so I still had to use those basic algebra skills. But as an early *Mathematica* adopter (I was on Wolfram’s original Advisory Board), from around 1990 onwards, none of the basic skills I had mastered through my undergraduate degree were necessary in order to do my work. (For some years, I continued to use them, because I had them to hand; but over the years my fluency dropped as I made more and more use of technology, leaving me more time to focus on things machines could not do.)

But that time I spent mastering the basics was not totally wasted, even in the long run. It was the scaffolding that helped me learn to think mathematically.

So, the traditional, skills approach worked for me. And for many others. But it came at a huge societal cost. The majority of my fellow K-12 schoolmates not only never got to the stage of making that transition to mathematical thinking, they ended up hating math (in some cases being very afraid of it), and dropping it at the first opportunity. Even worse, they ended up with a perception of what mathematics is that is dangerously wrong—a perception they carried with them into parenthood and in many cases a career as a mathematics teacher, ensuring the continued consumption of a product that was long past its sell-by date.

The consequences have been devastating for generations of students. I wrote about this in Devlin’s Angle in June 2010, in a post titled In Math You Have to Remember, In Other Subjects You Can Think About It. Please read that earlier post before you go any further with this essay. If it were not for the human carnage that results from teaching mathematics in a way that works only for a minority of students, there would be little point in my spending time writing this post or you reading it.

Make no mistake, the way our society teaches math, and has done so for generations, absolutely produces enough mathematicians to meet the national need.

On the other hand, because it discards so many, our approach also results in there always being a shortage of high school graduates who, while not mathematicians, have adequate mathematical ability to succeed in a world where mathematics plays such a central role. But we sort of make up for that deficit with adult education. (Although “remedial math” courses for adults can be hampered by having to overcome various degrees of math phobia resulting from a flawed approach at K-12.)

The real problem is the collateral damage the approach inflicts on the majority of students. The ones who are turned off. The majority. For life. It’s a national tragedy. One that, at least today (and in the past, had we proceeded differently, but that’s for another time) CAN be avoided. That’s where we need to go.

The goal is pretty clear. The purpose of education is to prepare the next generation for the lives they will lead. Agreed? So what might that entail for mathematics education? What do our students need to be able to do when they graduate?

Well, with today’s technologies, no professional using mathematics in the world outside the math classroom executes standard procedures “by hand”. Rather, we use all of the available technologies. They are faster than us, way more accurate, and can handle far more variables, and far greater datasets, than any human can. (Most real-world problems that require mathematics today typically have far too many variables to solve by hand.)

Being a mathematician (or a user of mathematics) today is all about using those tools effectively and efficiently. Our mathematics education system needs to produce people with that ability.

To re-use one of my favorite analogies, being a mathematician used to be like playing instruments in an orchestra (with all that entails), whereas today it is like conducting the orchestra. Mathematicians no longer solve problems by following rules in a step-by-step procedure. (Computers do that better.) They apply heuristics—flexible ways of thinking they acquire with repeated practice over time. (Computers cannot do that in any fashion worth discussing—unless you are a computer scientist, when it is a very interesting and challenging research problem encompassing AI, machine learning, and a bunch of other cool stuff.)

Note that word “acquire” in the above paragraph. To the best of my knowledge and experience, you cannot be taught heuristics; you acquire them over time. Mathematical thinking takes a long time to develop. The challenge facing today’s math educators is finding the most efficient way to reach that goal. A way that does not fail, and alienate, the majority of our students. There is, I think, good reason to believe this can be done. What gives rise to my optimism is that another way to express the change in mathematical praxis that I outlined above is:

Today’s mathematician *thinks like a human*, rather than *calculates like a computer*.

With mastery of computational skills no longer *an entry barrier*, mathematics learning starts to look very much like any other creative subject. (There has never been an educational problem of “English anxiety” or “Art phobia”, right?)

Mathematical thinking certainly requires—or seems to require—some experience with computation. (Exactly how much, and to what degree, remains an open question.) In the past, a mathematician had to master both. But even then, the *focus* of the learning had to be on getting to the thinking. Or rather, it should have been. When done by hand, algorithmic calculation is, by its nature, purely routine; clearly not an end in itself. (Though some of us gained pleasure from the activity, and the associated feeling of achievement, while it was still relatively new to us.)

So what is the most efficient way (*today*) to get students to acquire that all important mathematical thinking ability? In my previous post, I continued the passage quoted above with these two paragraphs:

“But here’s the rub. The mass of evidence from research in cognitive science* tells us that the only way to acquire that all important “deep learning” is by prolonged engagement with *some* specific mathematics topics. So, to be effective, any mathematics curriculum has to focus on a small number of specific topics.

Yet according to my first remark, there is no set of “most appropriate topics,” at least in terms of subsequent “applicational utility”. So what to do? How should we determine a curriculum?”

[* I will address some of that cognitive science research in a future post. Stay tuned.]

Absent any other overwhelming criteria, it makes sense to pick two or three curriculum topics that are most easily introduced, are at an abstraction level no more than two steps removed from the everyday physical world, and relate most closely to the daily life of the greatest number of students.

That puts arithmetic and geometry at the top of the list. And since the goal is to develop mathematical thinking, which involves handling abstractions and patterns, you need to throw in some elementary algebra as well. (There was a reason those parts of mathematics were developed first!)

Okay, perhaps also a bit about probability and statistical distributions (including creating and interpreting graphs and charts), since they play such a huge role in all our daily lives today. But that’s it. Anything else is optional, and should be dispensed only in small doses. (So the main topics can be studied in depth.)

For sure, no calculus, which has no place in K-12 education. Not least because there is no way it can be done well at that stage. (In large part, because it operates at the third level of abstraction, with its fundamental objects being operations on functions, which are themselves operations on numbers. That’s a huge cognitive leap that takes most of us several years to achieve.) The student who typically has the most trouble with university calculus is the one who has learned it poorly at high school, and comes to Calc 101 at university with a false belief they understand it and can do it, only to crash and burn in Calc 102. (Dealing with that crash scene was a large part of my life for over 25 years!)

Of course, it can be beneficial to *expose* students to various other parts of mathematics. The field of mathematics is one of the great accomplishments of human culture. There is benefit to be gained from showing the next generation some of that intellectual heritage (including calculus, in particular). Partly because it *is* part of human culture, and a major one at that. But also because it helps them appreciate the enormously wide scope of mathematical applications, providing them a meaningful context and purpose for devoting time to learning the stuff you are asking them to master (which will inescapably require a considerable amount of repetitive practice). But exposure is very different from achieving mastery, and requires a very different teaching approach.

In particular, there is certainly benefit to everyone in the classroom from the teacher showing their students some of the mathematics *they themselves enjoy*. It doesn’t matter whether it is “useful” or not. In terms of the students’ future lives, nothing you show them is likely to be of use to them (in the sense of applying it), as I indicated earlier. In fact, it is highly likely that any math the teacher found of use in their life won’t be very relevant to their students’ lives a generation later. Things change too quickly. But if the teacher likes it, that enthusiasm should shine through, to the great benefit of the class. It can never be an educational waste of time for a teacher to show the students something they are passionate about.

As to showing students the widespread utility of mathematics, in my experience the best way is to see what is in the news at the time, or to reflect on things going on in our daily lives, and ask the question, “What mathematics is (or might be) involved in that?” There is usually a lot. In the Google Era, it is generally not hard for a well-trained math teacher to find answers to that question.

For example, in a series of Devlin’s Angle posts last year (beginning with this one), I wrote about one example of a short high-school mini-course based on the question, “How does UPS manage to get all those packages to their destination on time?” It involves some fascinating mathematics, well within the reach of a high school student.

Notice that doing this kind of thing does not require the teacher to know any of the math involved. What is important is being able to *find out* about that math! A task that is pretty easy given Google, Wikipedia, and YouTube. (The ability to master a new mathematical technique quickly is one of the most important skills for a mathematician today.)

Sure, you can’t conduct that kind of investigation if you haven’t mastered some mathematical topics well. As I noted earlier, there is no by-passing that step. (Many readers of my articles and social media posts mysteriously seem to skip over that part and get angry at the straw man that results.) But the three topics I listed above (arithmetic, algebra, geometry) do just fine for preparing the groundwork. (If you master the use of a paint brush by practicing with white paint, blue paint, and red paint, it’s not that hard transferring your skill when you find yourself with a can of yellow paint, or green, or any other color, including colors that have not yet been developed. Incidentally, this analogy is far more accurate than may appear to anyone who has not progressed sufficiently far in mathematics. When you get well into math, you realize that the entire field is really just color variations on a common theme.)

In the same vein of making educational use of real-life applications of mathematics, back in the 1990s I worked on a six-part PBS television series called Life by the Numbers, where we presented segments on professionals in all walks of life who described how their work used, or depended on, mathematics. Providing further testimony to my initial remark that the mathematics being used at any one time changes rapidly (so virtually nothing a student learns in school will be directly useful when they graduate), I should note that practically all the applications of mathematics we showed in the series—applications chosen because they were cutting-edge in the early 1990s when the series was being made—are no longer being used that way today. In our technology-rich world, mathematical obsolescence can be as rapid as the demise of a pop song. Nevertheless, the series can still provide a useful resource to show *how* mathematics typically gets used.

But to return to the main question, let’s assume you decide to focus K-12 math education on arithmetic, geometry, and some elementary algebra, which is my suggestion (and that of a great many others in the mathematics education world). What is crucial is to teach it in a way that results in *understanding*. If it is approached as acquiring mastery of a toolkit of techniques, it ain’t gonna work. It really won’t. Teaching rote mastery of (some) tools was defensible in the days when there were no machines to do it. Today, the main benefit of engaging with a particular algorithm or procedure is *as a vehicle for developing mathematical thinking*. [Don’t miss that point. It’s important. That’s part of the cognitive science stuff I promised earlier to cover in a later post.]

But if the focus goes beyond engaging with a small collection of methods, studied in depth for understanding how the mathematics works, and becomes a smorgasbord of techniques touted as a “universal toolkit”, to be carried around and selected from each time a new problem arises, then you are in the tragic world Jo Boaler studied and wrote about.

The absurdity of the “teach math as a toolkit” approach was highlighted to me recently in the tweet (shown below) from a math instructor that landed in my Twitter feed. It has two glaring errors.

Tweet directed to me by a teacher as the culmination of a thread comparing different approaches to teaching math

First, there are no procedures that are universally applicable. All procedures are designed to perform a particular task or set of tasks. As I remarked last month, in my entire fifty-year career as a professional mathematician, I used NONE of the procedures I mastered in high school, and hardly any I mastered as an undergraduate. That is typical. What I relied on, all the time, however, was my ability to think mathematically.

Second, a strategy is not something to be taught, it is something you develop as part of (mathematical) thinking. For sure, if you try to “teach strategies” as some sort of toolkit, the result will be confused students, as the tweeter reported.

(An earlier tweet in the thread I pulled the one above from suggested to me that the instructor in question had indeed tried to teach mathematical problem solving “strategies” as a menu-accessed toolkit, with predictably disastrous results.)

Of course, you can show students particular strategies as illustrations, but they should be presented as just that: illustrations. The goal is to help the student acquire the ability to *think strategically*, not to “pick strategies from a menu.”

What that particular teacher was missing is that deciding HOW to approach a particular problem is arguably the most critical ability in mathematical problem solving. The teacher’s job is to help the student develop the ability to come up with a strategy. Yes, it may be a strategy they have already seen. Indeed, that is often the case. But it may require an *adaptation* of a known strategy, or the development of an entirely new approach.

In any event, since the list of potential strategies is effectively endless, it is hopeless trying to list them all in a select-from menu. If you do, the result will be, as another contributor to that thread noted, the students will be confused and put off by “the 1,578 different strategies they get pushed on them.”

No art teacher would provide students with a pull-down menu of specific ways to create paintings of different kinds—portraits of men, portraits of women, portraits of children, paintings showing buildings, paintings of rural scenes, paintings of skies, etc. No, they teach the student *how to paint* (which includes helping them learn how to *see*). From which grounding, the student can create and develop their own “strategies” to produce paintings of various kinds.

It’s a math ed twist to the old story about giving a starving person fish to eat as opposed to teaching them how to fish. One has limited value in the moment, the other is a valuable life skill. I suspect those contributors to that Twitter thread had little experience in real-life mathematical problem solving. (Which is why those of us who have should devote time and effort into keeping teachers informed about current praxis.)

At the most fundamental level, the issue is not algorithms versus strategies; it’s about approaching math as the *provision of a toolkit* (OUTDATED), as opposed to developing a *way of thinking* (CRUCIAL). The former, toolkit approach was defensible, and arguably unavoidable, in the millennia before we had tools for procedural math. But in today’s world, the crucial ability to be mastered is mathematical thinking. We need to get there as rapidly as possible, without losing the majority on the way.

To finish, I should note that many other mathematics educators have advocated that the main focus of K-12 mathematics education should be in-depth study of arithmetic, geometry, and a bit of algebra (and little else). For example, Liping Ma, whose approach I wrote about in the latter part of my Devlin’s Angle last October. You may find what she has to say of value. I certainly did.

In the November issue, I followed up on that post by taking the argument further, into the domain of systemic assessment of mathematics. I ended that November post (which was somewhat speculative, though I am part of a team conducting research in that area) with this sentence:

“Of course, I can keep repeating my message. In fact, you can count on me doing that. :) ”

Guess what? I just did!

MAA MathFest will be held in Cincinnati in just a few weeks, and we’re looking forward to seeing colleagues and friends, and together enjoying the wonderful program that has been developed. For us, this has become the premier annual meeting of the MAA community, and we are pleased with the growth of MAA MathFest over the last decade. And the MAA is committed to ensuring that all members of our community are able to participate fully in our meetings.

However, in recent years, we have grown in our awareness of the barriers to full participation in our community faced by women and other underrepresented groups. As such, we have also been careful to include language in our contracts for meetings that ensures that all of the venues we use commit to supporting the MAA Welcoming Environment Policy. Even so, the shifting political landscape creates new concerns among our members regarding selection of meeting sites.

The selection of sites for MAA annual meetings is a multi-year process; initial agreements with Cincinnati were completed in 2012. In general, the five- to seven-year lead time in selection of sites for MAA MathFest does not allow us to anticipate changes in the legal landscape. The MAA Meetings Management Committee is closely involved in all final venue and hotel selections, with the MAA Associate Secretary working hand in hand with the Meetings and Events team members to finalize the best possible choices for future years.

Identification of potential meeting locations begins with customized requests for proposals sent to several cities for each MAA MathFest. Our wish list is extensive and grows annually based on MAA member survey results. Member experience and feedback are key to our successful placement, planning, and execution of every national meeting.

Along with member costs of participation and concessions offered to the MAA in order to continue to facilitate low registration costs, the MAA requires venues to accept certain contract clauses, including the MAA Welcoming Environment Policy. Other comparison points include, but are not limited to, sleeping room and meeting room inventories, specific space capacities for all event types, exhibit hall appeal to exhibitors and registrants alike, and venue and city accessibility for all attendees, in compliance with the ADA, a federal law.

We also seek out urban settings with modern meeting facilities, affordable guest room options, and retail and dining outlets situated within easy walking distance. Convenient and affordable local transportation options and a city that is accessible from major travel hubs within the continental United States are also important considerations.

We also want our meetings to be conducive to full participation by all attendees regardless of gender, ethnicity, sexual orientation, or any other legally protected class. This has raised concerns in the past, and as sites for MAA MathFest are chosen 5-7 years in advance, MAA is facing new issues based on current trends in state laws.

During the civil rights era, both the MAA and AMS made positive steps towards integrating national and regional meetings. For example, for many years, when it was Mississippi’s turn to host the annual Louisiana-Mississippi Section meeting, the meeting was held on the Mississippi Gulf Coast, because this was the only place in the state open to holding integrated meetings.

More recently, in 2008, when California voters approved Proposition 8, prohibiting same sex marriage, there was an effort in the MAA Board of Governors to exclude Utah as a potential meeting site, because of the sense that citizens from Utah had funded the campaign supporting Proposition 8. On the other hand, there was no effort to restrict MAA from selecting sites in California! The Board of Governors ultimately chose not to exclude Utah as a potential meeting site, and Proposition 8 was later found unconstitutional.

In 2016, California passed Assembly Bill No. 1887, which asserts "California must take action to avoid supporting or financing discrimination against lesbian, gay, bisexual, and transgender people," and thus, prohibits the use of state funds to support employees traveling to states that are deemed to have laws that conflict with the principle. There are currently 10 states on the list, and this has raised concerns about the ability of our colleagues from California to attend meetings in the prohibited states. (We note that, at least for now, Ohio is not on the list, but Kentucky, where the Cincinnati airport is, is!)

The California law presents an unusual situation for us. Certainly, we want our meetings to be as inclusive as possible. As noted above, we insist that our contracts include language to support the MAA Welcoming Environment Policy. However, we have not limited the prospective list of meeting sites for MAA MathFest based on state laws, which are always in flux, especially in these turbulent times.

One might argue that the California law is unnecessarily creating hardships for their citizens that are counterproductive across multiple fronts. As an example, does traveling to Austin, TX, a liberal enclave in a decidedly conservative state, do more to support the conservative agenda of the State of Texas, or does it favor Austin’s liberal agenda?

Thus, MAA’s position is that, as long as the MAA continues to hold meetings, political calculations should be left on the sidelines as far as selecting meeting sites is concerned. We will continue to make decisions for annual meetings like MAA MathFest based on overall cost considerations, quality of meeting facilities, amenities of the host city, city and venue accessibility, and most importantly, the commitments of our hosts to abide by the MAA Welcoming Environment Policy. We are committed to structuring MAA meetings to align with MAA’s vision of a society that values the power and beauty of mathematics and fully realizes its potential to promote human flourishing.

The NSF Robert Noyce Teacher Scholarship Program provides funding for universities that develop innovative methods of training highly-effective science and mathematics teachers to work with K-12 students in high-need areas. Project Director, Dr. Carla Gerberry, and co-PI, Dr. Mary Stroud, describe some of the unique features of the Noyce Program at Xavier University in Cincinnati, OH.

**Q: What are some of the ways traditional teacher education falls short in preparing teachers to work in high-need areas? **

First, traditional teacher education often neglects to address topics of racism, stereotypes, and the needs of underserved students in meaningful and sustained ways. Second, we often fail to adequately prepare our teachers for high-need classrooms (both rural and urban) and their attendant issues. Third, teacher preparation programs spend little time on current events affecting their populations, such as gun violence, the lack of highly-qualified teachers, and the under-funding of public schools—particularly high-need urban and rural schools. These three things are facets of the larger challenge to provide our future teachers with a realistic view of what the classroom looks like and the struggles that students may have that are not related to classroom learning.

**Q: Describe one or two new aspects of training that you developed to meet the additional needs of prospective teachers of high-need students. **

Breakthrough Cincinnati hosts Noyce scholars for summer internships.

Our program has a summer internship that Noyce scholars complete after their freshman and/or sophomore years, providing them with experience teaching high-need students in grades 3-9. Scholars have two choices: Breakthrough Cincinnati—a collaborative that supports students through a summer school program; or STEM camps run on Xavier’s campus. Both experiences are meant to encourage interns to explore the possibility of a teaching career. If interns decide to become secondary mathematics teachers, they may apply for the Noyce scholarship for their final two undergraduate years.

Throughout the grant period, we run a boot-camp week to prepare our interns for the summer. Interns review classroom management strategies, create a summer teaching plan, learn about current events, and enjoy visits from guest speakers with experience teaching in a variety of schools.

We also provide support for scholars once they become teachers, via an academy that scholars attend during their first two years of teaching. We have informal mentoring for those scholars and frequently host them on campus with their new students as a form of community outreach.

**Q: What transformations do you notice as your Noyce Scholars advance through the program?**

As a result of their Noyce experiences, our scholars demonstrate increased levels of commitment and compassion for their students while gaining confidence in their own abilities to serve as effective teachers. They are better equipped to face and manage the challenges they may encounter while serving in high-need schools and districts. Overall, their experience helps confirm their sincere belief that they can make a positive difference in the classroom.

**Q: How have YOU grown or changed as a result of your involvement with this project?**

The Noyce program has introduced me to a variety of community partners as well as fantastic students. It has also helped my ideas evolve about what high-need schools are, the students who attend them, and the needs present in those schools. Our project has helped me to see the potential in all students and how to foster their growth in a positive way.

The most rewarding and joyous part of having the scholars is seeing them blossom into dedicated and competent teachers who love their students and are committed to their jobs. I also love that they keep coming back to Xavier to involve their students with our support.

*Editor’s note: Q&A responses have been edited for length and clarity.*

**Learn more about NSF DUE 1239995**

Full Project Name: Xavier University Robert Noyce Teacher Scholarship Program

Abstract: https://www.nsf.gov/awardsearch/showAward?AWD_ID=1239995

Project Website: https://www.xavier.edu/noyce/

Project Contact: Carla Gerberry, PI carla.gerberry@xavier.edu

For more information on any of these programs, follow the links, and follow these blog posts! This blog is a project of the Mathematical Association of America, produced with financial support of NSF DUE Grant #1239995.

*Erin Moss is a co-editor of DUE Point and an Associate Professor of Mathematics Education at Millersville University, where she works with undergraduates from all majors as well as graduate students in the M.Ed. in Mathematics program.*

*This is the first in what I anticipate will be an occasional series of guest blog posts on Launchings. Others who wish to submit a guest blog post for consideration should contact me at bressoud@macalester.edu.*

As the nation moves to improve student experiences and outcomes in postsecondary mathematics, the question of measurement arises regularly. Certainly some input variables (class size, institution type, textbook), contextual factors (gender, socio-economic status), and some outcome measures (grade, time-to-degree, major) are straightforward to collect and put into a model. But these do not touch on the messy interactions among students, instructors, content, and the learning environment.

As part of two large NSF-funded studies of introductory mathematics courses, specifically the Precalculus to Calculus 2 sequence, a suite of survey instruments was developed to measure some of these aspects in order to better understand student and instructor perspectives on course activities. These instruments, along with a history of their development and usage, are now available as a white paper for everyone who wishes to see and/or use them.

The first of these projects is Progress through Calculus (PtC; NSF DUE-1430540), which is run in conjunction with the Mathematical Association of America (MAA). The second is Student Engagement in Mathematics through an Institutional Network for Active Learning (SEMINAL; NSF DUE-1624643, 1624610, 1624639) and is run in conjunction with the Association of Public and Land-grant Universities (APLU).

Both projects draw on the previous NSF-funded Characteristics of Successful Programs in College Calculus (CSPCC) to guide case studies of introductory university mathematics programs, with the long-term goal of supporting student success in these courses and contributing to students’ successful completion (and enjoyment) of STEM majors. These surveys were developed collaboratively across both projects, and the authorship team of the white paper includes representatives of both projects: Naneh Apkarian, Wendy Smith, Kristen Vroom, Matthew Voigt, and Jessica Gehrtz.

Cover of X-PIPS-M Survey Suite

The surveys are referred to as the X-PIPS-M survey suite, with “M” standing for mathematics and “X” a placeholder for the versions aimed at students, instructors, and student instructors/TAs (there is also an observation protocol, but there the acronym breaks down a little). PIPS stands for *Postsecondary Instructional Practices Survey*, an acknowledgement of these instruments’ heritage: the original PIPS was designed for STEM more broadly (Walter, Henderson, Beach, & Williams, 2016). The X-PIPS-M suite also builds on the intellectual heritage of previous studies of Calculus 1 (Bressoud, Mesa, & Rasmussen, 2015). The instruments are designed to collect data about instructional practice from multiple perspectives in compatible ways so that triangulation is possible – both to reveal consistency and to identify dissonant perceptions that may exist. The white paper we have just completed, which includes the instruments themselves, also details the research supporting each item – whether it was drawn from an existing survey or written to address known gaps in prior work.

We are offering these surveys for public consumption and usage in the hopes of supporting practitioners as well as researchers of undergraduate mathematics education. We look forward to seeing how others make use of them!

**References**

Bressoud, D., Mesa, V., & Rasmussen, C. (Eds.). (2015). *Insights and recommendations from the MAA national study of college calculus*. Washington, DC: MAA Press.

Walter, E. M., Henderson, C. R., Beach, A. L., & Williams, C. T. (2016). Introducing the Postsecondary Instructional Practices Survey (PIPS): A concise, interdisciplinary, and easy-to-score survey. *CBE—Life Sciences Education*, *15*(4), ar53. https://doi.org/10.1187/cbe.15-09-0193

This work is supported in part with grants from the National Science Foundation (NSF DUE-1430540, 1624643, 1624610, 1624639) in conjunction with the MAA. All findings and opinions are those of the author and not necessarily of the funding agency.

Naneh Apkarian

*Naneh Apkarian is a post-doctoral research associate at Western Michigan University, in the Center for Research on Instructional Change in Postsecondary Education. She is part of the team of the MAA project Progress through Calculus.*

I am confident in asserting that all of you reading this post have taken a variety of standardized tests over the course of your lives. I would venture to add that, like me, your performance on those tests has influenced the options available to you.

What I am less confident of is whether or not the standardized instruments in widespread use contribute to more accurate assessments of our ability to perform, academically or otherwise.

This Fritz Lang film presented a future dystopia where the elite wield power over the workers.

These questions, and the debates that surround them, were framed nicely in an essay by Thomas Edsall, published on Wednesday, June 12, in *The New York Times*. The essay, entitled “The Meritocracy is Under Siege,” makes clear that there are valid reasons to question the impact of the widespread use of standardized tests, perhaps especially when they are used to assess individuals, but also that there are important reasons to use them as tools for gaining insight into large-scale features of education and other systems that are meant to serve the needs of our society.

Did you know that the word "meritocracy" itself was introduced in a 1958 satirical essay, "The Rise of the Meritocracy," by Michael Young? Young wrote that "merit is equated with intelligence-plus-effort, its possessors are identified at an early age and selected for appropriate intensive education, and there is an obsession with quantification, test-scoring, and qualifications." In the spirit of Orwell's "1984" and Huxley's "Brave New World," Young imagined this trend leading to a dystopian society ruled by those who thrived under such a regime.

Not surprisingly, there has been much written about both the merits and shortcomings of Young’s perspective over the last 60 years, and, as Edsall’s essay demonstrates, those arguments regarding meritocracy are far from concluded.

I am happy that we at the MAA are pursuing a diverse collection of programs that attempt to broaden participation in mathematics, and broaden the conception of what it means to do, and succeed at, mathematics.

At the same time, the MAA is closely aligned with an education system that relies on assessment and grading structures that are far from perfect and, in my view, serve to perpetuate both the positive and negative aspects of our sense of what meritocracy means. We must continue to consider and assess our programs, and evolve and adapt so that our work can better reflect the mission, core values, and vision of the Mathematical Association of America.

A particular way in which we are taking steps to align our work and our values is through our charter membership in the Societies Consortium on Sexual Harassment in STEMM. You may also recall our participation in a national survey on inclusion in STEM, described in this article in the April/May 2018 issue of *MAA FOCUS* magazine. The MAA Board of Directors is in the process of establishing a task force to review MAA’s policies and procedures, such as our Welcoming Environment policy, to strengthen our efforts to remove barriers to full participation in our discipline.

Thanks for joining the MAA in this important work!

The MAA’s calculus studies have highlighted problems with the retention of women in STEM disciplines, especially in the calculus sequence (see the October 2016 Launchings). Because the numbers are so small, we were not able to say anything about the persistence of Black and Latina/o students. But there are recently published and disturbing results on the persistence of these minority students (Riegle-Crumb et al, 2019), based on weighted data from the National Center for Education Statistics’ 2004/09 Beginning Postsecondary Students Longitudinal Study (BPS:04/09).

The authors approached the data with three research questions about students who either began in a 4-year undergraduate program or switched into a 4-year program from a 2-year college. I paraphrase their wording:

1. Is there a difference between White students and Black or Latina/o students in the rate of persistence in a STEM major? Lack of persistence could occur either by switching to a major outside of STEM (labelled “switch”) or failing to earn an undergraduate degree within six years (labelled “leave”). STEM is defined as a biological science, computer science, engineering, a mathematical science, or a physical science.

2. If there are differences, can they be explained by factors such as socio-economic status or high school preparation?

3. How do the findings for students in STEM compare to those for students in business, social sciences, or humanities?

Figure 1 shows that there is very little difference in the percentage of students who choose a STEM major. In answer to the first question and partial answer to the third, Figure 2 shows that there are considerable differences in persistence rates among STEM majors, with less pronounced differences in Business and Social Sciences, and almost none in Humanities. Among STEM majors, Black students are far less likely to persist, with much higher percentages both for switching and leaving (*p* < .001).

**Figure 1. **Choice of college major by race/ethnicity.

Reprinted from Riegle-Crumb et al, 2019, p. 136. Source: BPS:04/09. N = 5,626.

* Indicates a significantly higher percentage of Black students chose to major in business, compared to both White and Latina/o students (*p* < .05).

**Figure 2. **Persistence patterns in chosen field by race/ethnicity.

Reprinted from Riegle-Crumb et al, 2019, p. 137. Source: BPS:04/09. N = 5,626.

**p* < .05, ***p* < .01, ****p* < .001.

To answer question 2, the authors applied multivariate analysis, estimating logistic regression models predicting the likelihood of *switching versus persisting* and *leaving versus persisting* in each of the four categories of majors. Model 1 simply looked at race/ethnicity. Model 2 incorporated a wide variety of factors that included gender, socio-economic status, full or part-time employment, and characteristics of the post-secondary institution. Model 3 added four variables that measure high school preparation: SAT score, high school GPA, having taken precalculus or calculus in high school, and having taken four years of science in high school.
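The logic of this nested-models design can be sketched in code. Below is a minimal illustration on synthetic data with a hand-rolled logistic fit; the variable names and effect sizes are my own invented stand-ins, not the study’s data or models:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for the study's variables (illustrative only --
# this is NOT the BPS:04/09 data, and the effect sizes are invented).
black = rng.integers(0, 2, n).astype(float)
ses = rng.normal(0.0, 1.0, n) - 0.5 * black     # SES correlated with group
hs_prep = rng.normal(0.0, 1.0, n) + 0.3 * ses   # preparation tracks SES

# Simulated outcome: 1 = switched out of STEM, 0 = persisted.
true_logit = -1.0 + 0.6 * black - 0.7 * ses
switched = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

def fit_logistic(X, y, iters=50):
    """Fit a logistic regression by Newton's method (IRLS)."""
    X = np.column_stack([np.ones(len(y)), X])   # prepend an intercept
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - mu)
        hess = X.T @ (X * (mu * (1.0 - mu))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

# Model 1: group indicator only; Model 2 adds background (SES);
# Model 3 adds high school preparation, mirroring the paper's sequence.
b1 = fit_logistic(black[:, None], switched)
b2 = fit_logistic(np.column_stack([black, ses]), switched)
b3 = fit_logistic(np.column_stack([black, ses, hs_prep]), switched)
print("group coefficient by model:", [round(b[1], 2) for b in (b1, b2, b3)])
```

Comparing the group coefficient across the three fits shows how much of the raw gap each layer of covariates absorbs; in the paper, the striking finding is that for switching among STEM majors the Black–White gap does *not* disappear even in the fullest model.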

The results, which can be found on pages 139–140 of their article, are far too detailed to try to reproduce here. The main takeaway is the answer to question 2. Taking into account all of the additional factors, including high school preparation, the difference between White and Black students for switching remains high (*p* < .01). Between White and Latina/o students the difference essentially disappears once socio-economic factors are taken into account, even before factoring in high school preparation. Differences from White students in terms of leaving remain high for both Black and Latina/o students (*p* < .01), even with all other variables factored in.

It is significant that in the three other categories of majors, differences between White and Black or Latina/o students in either switching or leaving essentially disappear once socio-economic factors have been included. Thus, there appears to be an additional mechanism at work in the STEM fields. The authors suggest that this might be explained by stereotype threat. In their words,

While such spaces [STEM degree programs] are challenging to navigate for most students, minority students experience these spaces while subjected to specific stereotypes about their presumed inferior cognitive and mathematical ability. Put briefly, in STEM contexts, the presence of stereotype threat is likely to be very high. (p. 142)

The authors reference two papers on stereotype threat (Beasley and Fischer, 2012; Woodcock et al, 2012). It occurs whenever a student is aware that a particular group to which she or he belongs is expected to perform at a lower level than the dominant group. This is well-documented to produce additional stress that lowers performance as well as strengthening any uncertainties about belonging in a particular program.

Faculty biases can be subtle, imperceptible to those of us who have them but easily reinforcing stereotype threats to our students. The more we and our students know about them, the better equipped we all are to recognize and deal with them.

**References**

Beasley, M.A. & Fischer, M.J. (2012). Why they leave: The impact of stereotype threat on the attrition of women and minorities from science, math and engineering majors. *Social Psychology of Education*, **15**(4), 427–448. https://link.springer.com/article/10.1007/s11218-012-9185-3

Bressoud, D. (2016). MAA Calculus Study: Women in STEM. *Launchings* *October, 2016*. http://launchings.blogspot.com/2016/10/maa-calculus-study-women-in-stem.html

Riegle-Crumb, C., King, B., & Irizarry, Y. (2019). Does STEM stand out? Examining racial/ethnic gaps in persistence across postsecondary fields. *Educational Researcher*, **48**(3), 133–144. https://journals.sagepub.com/doi/abs/10.3102/0013189X19831006

Woodcock, A., Hernandez, P.R., Estrada, M., & Schultz, P.W. (2012). The consequences of chronic stereotype threat: Domain disidentification and abandonment. *Journal of Personality and Social Psychology*, **103**(4), 635–646. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3779134/

This month’s column is about what topics to cover in high school mathematics. I say that up front because that’s not how I am going to begin. But bear with me. Everything I say will be relevant.

Earlier this month I attended an international mathematics research conference at the Fields Institute in Toronto, Canada. The occasion was the fiftieth anniversary of the (University of) Toronto Set Theory Seminar. I spent three lengthy periods as a visiting faculty member at the University of Toronto in the late 1970s and early 1980s, participating actively in the seminar. Though I left the field in the mid-1980s, the moment I heard about the anniversary celebration, I could not resist registering my interest in attending. Though **I** had left the field, most of the mathematicians who were part of my mathematical world back then had not, and a fair number of them were going to be present. It would be wonderful to meet up with them again. In the event, the organizers graciously sent me an invitation, together with a request to give both a seminar talk (I had a topic that was relevant, though far from central) and an associated RCI - Fields Institute Public Lecture.

Though an absence from the field of thirty-five years left me unable to follow any of the talks except in a very superficial way, I greatly enjoyed being back in that particular, intense research domain for a few days. I had, after all, been immersed in it for almost a quarter of a century of my life.

A number of talks aroused my interest in particular, being on specific research questions I had worked on. Among them was a talk titled *Analytic Quasi-Orders and Two Forms of Diamond*, given by Prof Assaf Rinot of Bar-Ilan University in Israel. I did not know Prof Rinot (he is much younger than me, from a later generation of set theorists), but I did know his research advisor.

Slide 16 from Prof Rinot’s seminar talk

I struggled (mostly in vain) to follow the details in a talk on a topic I once had at my fingertips. That I had anticipated. What I did not expect was a slide Prof Rinot put up at the very end of his talk (slide 16 of 18), shown here. I had absolutely no recollection either of formulating that rather complex looking definition or of proving that theorem. And I had only a very vague sense of what it all meant. (I could not remember exactly what constituted the “non-ineffable case”.) I was also taken aback (though pleased) when Rinot went on to say that the result had been observed to be useful in more recent work. For those interested, a video of Prof Rinot’s talk is available on the conference archive.

So why do I recount this personal story? The point of relevance is that the concepts and specific methods in that branch of mathematics are almost entirely separate from anything I studied and mastered either at school or as a mathematics undergraduate, and have only minimal overlap with courses I took as a graduate student. Rather, I acquired all that specialized knowledge and knowhow “on the job,” as I followed various research leads.

After I left set theory in the mid-1980s, I focused on very different mathematical problems in the real world, first in industry, then, after the September 11, 2001 attack on the World Trade Center, in a series of Defense Department projects for the CIA, the US Navy, and the US Army. Again, I acquired the specialist knowledge and knowhow to work in those domains “on the job”. [That was the research I talked about in my conference presentation. See here for an abstract of my talk; at the time of writing, the video of my talk has not yet been processed and published on the conference archives. As you will notice from the talk’s title, my DoD work was inspired by my earlier work in set theory, but was very different in many essential ways.]

The point is, although I have been professionally engaged in mathematics since I began work on my doctoral degree in 1968, essentially *none* of the concepts, definitions, or methods I learned (and practiced) either in school or in my undergraduate mathematics degree played any role in any of that professional work. (That, by the way, is typical.)

You may, therefore, be tempted to think that my entire mathematical education was a waste of time. Not at all. Without it, I could not possibly have done any of that subsequent research in set theory. For what I took away from all those years of struggle in high school and university was *the ability to think mathematically*.

Likewise, the very detailed, expert knowledge of set theory I acquired in my many years of research in that field proved to be invaluable in my work for the DoD, even though the two domains are about as far apart as you could imagine. (My work in set theory was about the properties of sets of higher orders of infinity; my work for the DoD focused on improving intelligence analysis.)

[In fact, it was some earlier research I had done based on set theory that led to me being asked to work on those post-9/11 projects. Someone at the DoD had seen the potential relevance to intelligence analysis—actually, far more likely someone at a Defense contractor organization.]

So what does all this have to do with high school mathematics education?

First, *what* is taught is not, in itself, of any significance. The chances that anyone who finds they need to make use of mathematics at some point in their life or career will be able to use any specific high school curriculum topic are close to zero. In fact, by the time a student today graduates from university, the mathematics they may find themselves having to use may well not have existed when they were at school. Such is the pace of change today.

Second, what is crucial to effective math learning is what is sometimes called “deep learning”: the ability to think fluidly and creatively, adapting definitions and techniques already mastered, reasoning by analogy with reasoning that has worked (or not worked) on similar problems in the past, and combining (in a creative fashion) known approaches to a novel situation.

But here’s the rub. The mass of evidence from research in cognitive science tells us that the only way to acquire that all-important “deep learning” is by prolonged engagement with *some* specific mathematics topics. So, to be effective, any mathematics curriculum has to focus on a small number of specific topics.

But according to my first remark, there is no set of “most appropriate topics,” at least in terms of subsequent “applicational utility”.

So what to do? How should we determine a curriculum? That’s my topic in next month’s post. (Hint: There are other educationally important criteria besides utility.)

Opening panel at the EDSIN INCLUDES Conference in Boulder, CO on Tuesday, April 2, 2019: James Rattling Leaf, Sr., Cooperative Institute for Research in Environmental Sciences; Gina Helfrich, NumFocus; Cedric Chambers, Jump Recruits; Kaitlyn Stack Whitney, Rochester Institute of Technology; Clyde Cristman, Virginia Department of Conservation and Recreation (not pictured).

In the realm of humanistic mathematics, two important gatherings happened in April 2019. The first was an NSF INCLUDES funded conference called “Bringing the Conversation of Inclusion and Diversity in Data Science to the Ecology and Environmental Science Community,” which seeded EDSIN, the Environmental Data Science Inclusion Network. The second was the second meeting of the Ethics and Mathematics Conference in the UK.

At first glance, one might not see the relationship between these two meetings. Perhaps for this reason, as I sought sponsors for the EDSIN conference, mathematics communities weren’t sure how they should be involved.

While mathematics as a field includes much more than data science, the MAA is still well-poised to support and promote interdisciplinary data science education. Certainly there are a lot of other disciplines poised to mold this area. This conference in particular attracted many professionals from data science fields in Geographic Information Systems and Bioinformatics who are serving as models for other fields transitioning to using data in their teaching and research.

Carolyn Finney at the EDSIN INCLUDES Conference in Boulder, CO on Tuesday, April 2, 2019 giving her talk “At the Crossroads: Black faces, white spaces, and re-thinking green.”

The EDSIN conference provided a unique space to spotlight environmental justice, the reclamation of indigenous rights, the reparations owed to Black America, and the emerging data divide. Where else in the mathematics community are we inviting these honest conversations?

At the EDSIN conference, Carolyn Finney talked honestly about what it felt like to be not just underrepresented, but unheard. She was honest about the consequences in the academy - not just who we exclude in our discussions of environmental data, but also the ways that we disempower even our own underserved faculty. How does this lack of support, which can eventually manifest as denial of tenure, further add to issues of underrepresentation in positions of power?

Often, individuals with power collect data from individuals with lesser power. Then the data can be used in such a way that it further disempowers those with lesser power and undermines those with the least resource capacity (e.g. Andrejevic 2014). This “data divide” is a key component in the weapons of math destruction I discussed in my previous post about the Data4BlackLives movement and Cathy O’Neil’s book.

Opening presentation for EDSIN NSF INCLUDES Conference in Boulder, CO on Tuesday, April 2nd by Dr. Alycia Crall, National Ecological Observatory.

This data divide is an unstable equilibrium threshold that keeps the “haves” and the “have nots” separate. The power of data is the force that widens the distance between them in positive feedback loops that make the “rich get richer.”
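To make that systems language concrete, here is a toy difference equation (my own illustration, not a model presented at the conference) in which an equal split of data capacity is an unstable equilibrium under rich-get-richer dynamics:

```python
# Toy model of the 'data divide': s is the share of data capacity held
# by the advantaged group. Equal shares (s = 0.5) is an unstable
# equilibrium: any small advantage compounds toward s = 1.
def step(s, rate=0.5):
    return s + rate * s * (1 - s) * (s - 0.5)

trajectory = [0.55]          # start with a small initial advantage
for _ in range(100):
    trajectory.append(step(trajectory[-1]))

# The gap widens monotonically: the 'haves' approach the whole share.
print(round(trajectory[-1], 3))
```

Exactly at `s = 0.5` the system stays put, but any perturbation feeds back on itself, which is the "rich get richer" loop described above.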

As MAA positions itself as a leader in education concerned with inclusivity, we have to look at the time and spatial scales at which disparities in education operate. These scales are subject to the same systems flow that perpetuates the data divide.

Although our classrooms usually operate on a smaller scale, our entire professional body and broader mathematical community operate on these broad scales. We represent mathematicians all over the country who teach future teachers of all grades, who in turn teach future collectors of data and builders of algorithms. We are practitioners in these industries or in policy positions. We are part of these communities who are disparately affected. To what extent are we obligated to re-examine our role in making sure that data science is emerging with an eye towards ethics and social justice?

Dr. Drew Hasley helps us understand how data visualization and programming can be made more accessible at the EDSIN NSF INCLUDES Conference in Boulder, CO on Wednesday, April 3, 2019. To combine Universal Design for Learning and Inclusive Pedagogy, he suggests we talk about Inclusive Design of our curricular and professional spaces.

As I was mulling these big questions toward the conclusion of the EDSIN conference, I started to see the #EiM2 hashtag from my friends and fellow PRIMUS board colleagues, Dr. Victor Piercey and Dr. Catherine Buell. Several time zones away, the mathematical community was also considering the same broad ethics questions in mathematics.

As the #EDSIN2019 and #EiM2 hashtags brought a lively debate to my Twitter feed, I pondered, what happens now? Where does the math community take this up after these conferences are over?

As the MAA committee chair for minority participation in mathematics, I see that minority participation cannot be untangled from the same flow that maintains the data divide. But this is not a problem that this committee can or should fix. The only thing that can combat systemic racism is a social movement with capacity for change. I know I am not the only mathematician with these concerns who is thinking about systemic shifts. It makes me consider how the whole MAA community could be harnessed for the kind of work that needs to be done.

Could the answer be a new Data Science SIGMAA? Could it be a reorientation of the Quantitative Literacy SIGMAA? Should every program review done by an MAA representative look for evidence of ethics in the mathematics curriculum and look for equity and inclusivity in its department? Could it be a grassroots #DisruptDataScience inspired by my colleague, Dr. Harron?

I recently gave a talk as part of a graduate lecture series in Data Science for Biology at the University of Puerto Rico - Río Piedras. I was honored to be the first discipline-based education researcher that they had brought for this lecture series.

Slide by Dr. Carrie Diaz Eaton as part of a presentation which summarized discussions at the EDSIN NSF INCLUDES Conference, presented at La Universidad de San Juan - Río Piedras, April 9, 2019.

The question came up - how do we teach this better? What do we do? At present I have no answers. But I’d like each of us to at least ask the right questions. To what extent does the work we do uphold the status quo, widening the data divide? Or does our work seek to actively push against the systematic marginalization of our community members? How could each MAA committee, each SIGMAA, each member of the mathematics community work to make sure that a future driven by mathematics, data, and algorithms is a just future?

I want to know, because I want to crowdsource good ideas from our community. Have you thought about the future our algorithms will shape, and how have you been intentionally working to make sure it is a future inclusively designed for all of us? Have you organized sessions at national and section meetings? Are you inviting speakers who can speak to both our students and our colleagues on these issues? Are they invited as diversity talks, or are they part of an effort to mainstream these discussions? Let us know!

Students often struggle to visualize concepts in multivariable calculus. While technology can help, virtual surfaces can still seem out of reach. Aaron Wangberg of Winona State University describes how his team created dry erase surfaces and supporting materials to enhance students’ interpretation of the mathematical objects they were studying.

**NSF grants are competitive. What do you think it is that set your proposal apart and got the project funded?**

We created contextualized activities with concrete manipulatives to deepen students’ understanding of key concepts in multivariable calculus. There was a clear end goal with a concrete plan to create materials that could be widely disseminated for a broad impact. We had prototypes of the products and promised to disseminate the final materials to 30 institutions. Our proposal included letters of support and names of participants from 20 institutions.

**Where did you get the idea for this project?**

Teaching multivariable calculus, I realized students could use the formula for the gradient, but couldn’t tell me what was important about the resulting vector.

I spent hours taping together bowls and using paper-mache or plaster-of-paris to make surfaces my students could manipulate while exploring partial derivatives and the gradient. When I used these surfaces in class and asked students where the gradient pointed, several groups listed their vector components until one group said “It points uphill!” This spurred a lot of activity as other groups double checked their results on different surfaces. When I asked students to re-measure partial derivatives and re-form the vector after they’d rotated the surface, there was another ‘aha!’ moment when they realized the gradient vector stayed the same relative to the surface even though the numbers had all changed.
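The students’ “It points uphill!” is exactly the defining property of the gradient, and it can be checked numerically on any sample surface. A minimal sketch (the function below is my own illustration, not one of the project’s surfaces):

```python
import numpy as np

def f(x, y):
    """A sample 'surface': an inverted elliptic paraboloid (my own
    illustrative choice, not one of the project's manipulatives)."""
    return -(x**2 + 2 * y**2)

def grad_f(x, y, h=1e-6):
    """Estimate the gradient with central differences."""
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return np.array([dfdx, dfdy])

p = np.array([1.0, 1.0])
g = grad_f(*p)
g_hat = g / np.linalg.norm(g)

def slope_along(u, h=1e-6):
    """Directional derivative of f at p in the unit direction u."""
    return (f(*(p + h * u)) - f(*(p - h * u))) / (2 * h)

# Sample the slope in 360 compass directions: the steepest 'uphill'
# slope occurs in the gradient direction, just as on the surfaces.
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
slopes = [slope_along(np.array([np.cos(a), np.sin(a)])) for a in angles]
```

Rotating the surface corresponds to rotating the coordinate axes: the components of `g` change, but the direction of steepest ascent relative to the surface does not, which is the second ‘aha!’ moment described above.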

I realized then the importance of hands-on manipulatives in helping students identify the key features of a gradient vector. I wanted other students to have the same experience and other instructors to have available the resources I had painstakingly created by hand.
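The “it points uphill” discovery can also be checked numerically. The following sketch is purely illustrative and not part of the project’s materials; the sample surface, starting point, and step size are all invented:

```python
# A generic sketch (not from the project): estimate the gradient of a
# "surface" z = f(x, y) numerically and check that it points uphill.
def f(x, y):
    return -(x**2 + 2 * y**2) / 10.0   # a hypothetical bowl-shaped surface

def gradient(f, x, y, h=1e-6):
    # Central-difference estimates of the two partial derivatives.
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

x0, y0 = 1.0, 1.0
gx, gy = gradient(f, x0, y0)

# A small step in the gradient direction should increase f: "It points uphill!"
step = 1e-3
assert f(x0 + step * gx, y0 + step * gy) > f(x0, y0)
```

The rotation observation in the anecdote is the coordinate-free version of the same fact: the gradient is attached to the surface, not to any particular axis labels.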

**How did the project evolve from taped bowls and papier-mâché to the sophisticated models you have now?**

Ben Johnson, a practicing mold-maker and my former student, advised me on how to produce smoother surfaces. Over the course of a year, we developed surfaces that were carved out of blocks of wood with a CNC machine and then finished with a white coating that acted as a dry-erase surface. Ben and I also developed several activities exploring multivariable calculus ideas: level curves, partial derivatives, and constrained optimization (Lagrange multipliers). He presented the project at the Joint Mathematics Meetings.

I tried the materials in my multivariable calculus course in order to introduce new concepts and found students were exploring connections between ideas that we wouldn’t typically cover until weeks later.

**What inspired you to apply to this grant program?**

I knew the materials were in high demand. Every time I spoke at a conference, instructors would ask how to get them. I had worked with three composite engineering students at Winona State University to find plastic materials and a process for turning the wood models into clear plastic dry-erasable surfaces, and the TUES program then provided funding for the equipment needed to turn that theoretical process into a concrete, cost-effective application.

Since my graduate advisors had used NSF funding to develop instructional materials for vector calculus and physics, I decided to apply here for funding to expand this project’s impact to where it is today.

**Tell us about the impact this project is having.**

The materials and activities have been created to help students develop productive understandings of multivariable functions, derivatives, and integrals. In the first year of implementation, the materials have been used in over 40 courses with over 1000 students.

Because students don’t have the formulas for the surfaces, they instead have to reason using context and geometric relationships. Beyond the activities themselves, instructors note how much more students participate in their small groups when they use the materials, and how readily students contribute ideas to the whole class.

One instructor, who used the materials in one of two multivariable calculus courses, noted how students in the course that used the materials were much more inquisitive and asked questions during non-surface days. The materials are impacting teaching styles as well, encouraging discussion and inquiry.

*Are you interested in using these surfaces? Check out the project website https://raisingcalculus.winona.edu/ for more information and access to resources.*

Editor’s note: Q&A responses have been edited for length and clarity.

**Learn more about NSF DUE #1246094**

Full Project Name: Raising Calculus to the Surface

NSF Abstract Link: https://www.nsf.gov/awardsearch/showAward?AWD_ID=1246094

Project Website: https://raisingcalculus.winona.edu/

Project Contact: Aaron Wangberg, awangberg@winona.edu, Principal Investigator

*For more information on any of these programs, follow the links, and follow these blog posts. This blog is a project of the Mathematical Association of America, produced with financial support of NSF DUE Grant #1626337.*

*Audrey Malagon is lead editor of DUE Point and a Batten Associate Professor of Mathematics at Virginia Wesleyan University with research interests in inquiry-based and active learning, election security, and Lie algebras. Find her on Twitter @malagonmath.*

Examining a first edition copy of Pacioli’s book *Summa* (1494)

What’s your reaction when you see the term “double-entry book-keeping”? Do you associate it with cool, societal-changing innovations like the Internet, Google, social media, laptops, and smartphones? Probably not. Neither did I—until I was asked to write a brief article about the fifteenth century Italian mathematician Luca Pacioli, to go into the sale catalog for the upcoming (June) Christie’s auction of an original first edition of his famous book *Summa de arithmetica, geometria, proportioni et proportionalita* (“Summary of arithmetic, geometry, proportions and proportionality”), published in 1494, which I referred to in last month’s column. (I also gave a talk at a public showing Christie’s organized in San Francisco on April 24, which gave me an opportunity to examine the book myself.)

Sure, I knew what double-entry book-keeping was. Indeed, I spent several days in March going through my QuickBooks records as I prepared my annual tax filing. Though by no stretch of the imagination am I in the big-income category, my tax-filing situation is not simple. I have several sources of income from around the world—from my university position and my ed tech startup company BrainQuake; fees from writing, speaking, and consulting; royalties from books; and more recently income from a number of pensions and annuities. As a result, I long ago started keeping meticulous records, using Excel spreadsheets to keep track of individual activities and QuickBooks to bring it all together. Excel is a digital implementation of ninth century commercial arithmetic and algebra, as laid out by al-Khwārizmī in his two famous books on the subjects; QuickBooks is an implementation of the book-keeping methods described by Pacioli in Chapter 9 of *Summa*.

Having spent time studying both al-Khwārizmī and Leonardo of Pisa (a.k.a. Fibonacci) while I was researching my three books on Leonardo and his hugely influential mathematics text *Liber abbaci* (the project began with a single book in mind), I had long ago come to appreciate the magnitude of the mathematical developments those two authors had catalogued—and thereby contributed to—in terms both of human intellectual progress and the impact of those methods on the way people around the world go about their daily lives. Double-entry book-keeping, on the other hand, the one topic Pacioli covered that those previous authors had not, simply never caught my attention. It’s just keeping records, right? What’s the big deal?

Given my experience with al-Khwārizmī and Leonardo, I should have known better. But there is a reason why people hardly ever give any thought to just how revolutionary, in their time, were numbers (and the associated innovation of money), arithmetic, the Hindu-Arabic representation, the classical arithmetic algorithms, and algebra. Each of those innovations changed human life in such fundamental ways that, once humanity had them, we incorporated them (and the products and activities they brought in) into our daily lives to such an extent that we no longer gave them any thought. Their fundamental role became no more remarkable than the presence of air and water.

The same is true of more recent innovations, such as radio, the telephone, computers, the Internet, laptops, and mobile phones. To those of us who lived through at least some of those innovations, they still feel like life-changing developments. But ask anyone of school age and it is clear that, to them, all of those technologies are just part of the everyday environment. Nothing remarkable. It is a measure of the greatness of any innovation that completely transforms the way we live that, before long, we no longer recognize how profound and remarkable it is.

Time, then, to take a fresh look at double-entry bookkeeping.

The benefit of keeping detailed records of financial transactions was recognized back in ancient times. For example, in ancient Rome the first emperor, Augustus, created imperial account books and established a tradition of publishing data from them. While Augustus’ primary purpose may have been propaganda—to publicize his personal spending—he made use of the accounts to plan projects and think about how the empire was managed. According to historian Jacob Soll in his excellent book *The Reckoning*, Augustus’ attention to the accounts enabled Rome to flourish.

But the beginnings of modern bookkeeping came much later, in the emerging city-states of northern Italy in the eleventh century, where the Crusades sparked a massive growth in commercial activity. As trade flourished, merchants in Florence and Venice, in particular, developed a method of accounting that became known as bookkeeping *alla veneziana *(“the Venetian method”).

In their ledgers, the Venetian merchants listed debits and credits in two separate columns. As Pacioli subsequently explained in *Summa*, this was the key to the new form of bookkeeping: “All the creditors must appear in the ledger at the right-hand side, and all the debtors at the left. All entries made in the ledger have to be double entries—that is, if you make one creditor, you must make someone debtor.” Today, we call this “double-entry bookkeeping.”

One important benefit of this system is that it provides a built-in error-detection tool: if at any moment in time the sum of debits for all accounts does not equal the corresponding sum of credits for all accounts, you know an error has occurred.

In Florence, in the fifteenth century, the bank run by the Medici family adopted double-entry accounting to keep track of the many complex transactions moving through accounts. This enabled the Medici Bank to expand beyond traditional banking activities of the time. It started opening branches in different locations, offered investment opportunities, and made it easy to transfer money across Europe using exchange notes that could be bought in one country and redeemed in another. This growth allowed them to dominate the financial world at a time when Florence was the center of the world for trade and education.

This then was the environment in which Pacioli grew up and lived. As a result, when he set out to write an account of all the commercial mathematics known at the time, his list of contents included one topic that could not be found in Leonardo Pisano’s *Liber abbaci*: book-keeping.

**Pacioli**

Portrait of Luca Pacioli, generally attributed to Jacopo de’ Barbari, ca.1500

Luca Bartolomeo de Pacioli was born between 1446 and 1448 in the Tuscan town of Sansepolcro, where he received an *abbaco* education, the package of commercially-oriented Hindu-Arabic arithmetic, practical geometry, and trigonometry that had been common in Italy since Leonardo published *Liber abbaci*, on which the schooling was based. (I tell that story in my 2011 book *The Man of Numbers*.) With texts written in the vernacular rather than the Latin used by scholars, *abbaco* focused on the skills required by merchants.

Around 1464, Pacioli moved to Venice, where he worked as a tutor to the three sons of a merchant. It was during this period that he wrote his first book, a short text on arithmetic for the boys he was tutoring.

In 1475, he started teaching in Perugia, first as a private teacher, then, in 1477, becoming the holder of the first chair in mathematics at the university. In 1494, he published his book *Summa*, which made him famous. In 1497, he accepted an invitation from Duke Ludovico Sforza to work in Milan, where he met Leonardo da Vinci, with whom he worked and to whom he taught mathematics until their paths diverged around 1506. Pacioli died at about the age of 70 on 19 June 1517, most likely in Sansepolcro, where it is thought he spent his final years.

In addition to *Summa*, published in Venice in 1494, Pacioli wrote a number of other mathematics books:

*Tractatus mathematicus ad discipulos perusinos* (Ms. Vatican Library, Lat. 3129) is a nearly 600-page textbook dedicated to his students at the University of Perugia, where Pacioli taught from 1477 to 1480. It covers merchant arithmetic (barter, exchange, profit, mixing metals, etc.) and algebra.

*De viribus quantitatis* (Ms. Università degli Studi di Bologna, 1496–1508), a treatise on mathematics and magic.

*Geometry* (1509), a Latin translation of Euclid’s *Elements*.

*Divina proportione* (written in Milan in 1496–98, published in Venice in 1509). Two versions of the original manuscript are extant, one in the Biblioteca Ambrosiana in Milan, the other in the Bibliothèque Publique et Universitaire in Geneva. The subject was mathematical and artistic proportion, especially the mathematics of the golden ratio and its supposed potential application in architecture. Leonardo da Vinci drew the illustrations of the regular solids in *Divina proportione* while he lived with and took mathematics lessons from Pacioli. Leonardo’s drawings are probably the first illustrations of skeletal solids, which allowed an easy distinction between front and back. The work also discusses the use of perspective by painters.

Two of Pacioli’s books were to have a lasting impact. One was his book on the golden ratio, which gave the initial impetus to a cottage industry of writings that continues to this day, claiming to have identified the number Euclid referred to as the “extreme and mean ratio” in all manner of worldly objects and human artistic creations. While a few of those claims have substance (mostly the ones about the botanical world), the vast majority are entirely spurious, originally no doubt inspired in part by the use of the adjective “divine,” plus the proximity (to Pacioli) of Leonardo da Vinci, and thereafter driven by a dangerous intellectual mix of wishful thinking and mathematical naiveté.

Pacioli’s other influential book was *Summa*. Yet, in many respects, *Summa* is little more than an updated, vernacular version of *Liber abbaci*, which itself was an updated Latin translation of al-Khwārizmī’s Arabic books on arithmetic and algebra. But two factors resulted in *Summa* having a degree of impact that greatly exceeded those two earlier works.

First, thanks to the recent invention of the printing press, *Summa* was the first major *printed* mathematics text, a format that could be duplicated and sold on a wide scale. In the days when manuscripts were hand-written, authors of mathematics texts avoided any use of the abstract symbols they used to do calculations—other than the basic numerals—because they could not rely on accurate copying of formulas and equations by the scribes who made copies. But with print, there was nothing to prevent entire pages consisting of little other than formulas and equations. (The reason people today associate mathematics with symbols is a result of the printing press. Before then, mathematics was a subject presented in prose.)

Indeed, as I recounted in *The Man of Numbers*, we would today not know about Leonardo’s work and the major role it played in the development of the modern world, were it not for Pacioli’s acknowledgement that *Summa* was based largely on Leonardo’s teachings. The thirteen whole or fragmentary handwritten *Liber abbaci* manuscripts that are now treasured items in the libraries lucky enough to have them would likely still be gathering dust in archives, unseen by modern eyes.

That they are not is due to Pietro Cossali, an Italian mathematician in the late eighteenth century, who came across the reference to Leonardo while studying *Summa* in the course of researching a mathematical history book he was writing. Intrigued by Pacioli’s brief reference to an unknown “Leonardo Pisano” as having been the source for most of the contents of *Summa*, Cossali began to look for the Pisan’s manuscripts, and in due course learned from them of Leonardo’s important contribution. (A French historian subsequently invented a surname for the newly re-discovered Leonardo: “Fibonacci,” and thereby helped give rise to a modern-day mathematics legend.)

To return to my main theme: because it was a print book, *Summa* achieved a far wider readership than *Liber abbaci*, or any of the other handwritten manuscripts that were based on Leonardo’s work. And so its impact was far greater. For that, Pacioli was simply lucky that he wrote his book after the printing press became available.

On the other hand, we can definitely credit Pacioli for the other factor that made *Summa *unique: his inclusion of a chapter on accounting.

As with *Liber abbaci*, *Summa* was more than a business person’s “how to” manual. Both were scholarly mathematical texts, written in the rigorous logical fashion of Euclid’s *Elements*.

*Summa* consists of ten chapters covering essentially all of Renaissance mathematics. The first seven chapters form a summary of arithmetic; chapter 8 explains contemporary algebra (initiating the use of logical argumentation and theorems in studies of the subject); chapter 9 covers various topics relevant to business and trade (including barter, bills of exchange, weights and measures, and bookkeeping); and chapter 10 describes practical geometry and trigonometry. As I noted earlier, none of the methods described are due to Pacioli himself; his contribution, which was significant, was the comprehensive, comprehensible exposition.

Significantly, *Summa* was also the first printed book to codify and give a comprehensive explanation of modern, double-entry bookkeeping, a system of accounting with a long history going back to Jewish bankers in Cairo in the eleventh century (maybe earlier), and used by Italian merchants and bankers, including the Medicis in Florence, throughout the fourteenth and fifteenth centuries.

Pacioli clearly viewed chapter 9 as significant, devoting 150 pages to its coverage of mathematical techniques for business. It is in the section titled *Particularis de computis et scripturis* (“Details of calculation and recording”) that he describes the accounting methods then in use among northern-Italian merchants, including double-entry bookkeeping, trial balances, balance sheets, and various other tools still employed by professional accountants today. (The chapter also introduces the “Rule of 72” for predicting an investment’s future value, a technique that anticipated the development of the logarithm over a century later.)
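The Rule of 72 itself is simple to state: money growing at r percent per period doubles in roughly 72/r periods. A quick sketch comparing the rule with the exact doubling time from compound growth (a generic illustration, not one of Pacioli’s own worked examples):

```python
import math

def doubling_time_rule_of_72(rate_percent):
    # The shortcut: periods to double an investment is about 72 / r.
    return 72.0 / rate_percent

def doubling_time_exact(rate_percent):
    # Exact value, solving (1 + r/100)^n = 2 for n.
    return math.log(2) / math.log(1 + rate_percent / 100.0)

# At 6% growth the rule gives 12 periods; the exact answer is about 11.9.
assert abs(doubling_time_rule_of_72(6) - doubling_time_exact(6)) < 0.2
```

The exact formula requires a logarithm, which is why a workable approximation was so valuable a century before logarithms existed.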

In deciding to include a substantial chapter on business mathematics, Pacioli was simply reflecting the local needs of the time, just as al-Khwārizmī wrote his algebra book in response to the changing practices of the merchants around him in ninth century Baghdad, who were developing ways to “scale up” arithmetic to handle multiple trades where the same calculation was being repeated often with different numbers.

As noted earlier, the Medicis used double-entry accounting to keep track of the many complex transactions moving through their accounts, allowing the Medici Bank to expand beyond the traditional banking activities of the time, setting up branches elsewhere and offering customers investment opportunities, as well as making it easy to transfer money across Europe using exchange notes that could be bought in one country and redeemed elsewhere. The Medicis’ mathematically-driven financial expertise enabled them to dominate the financial world at a time when Florence was the center of world trade.

Pacioli’s *Summa* showed others how it was done. He was surely writing a book for which he knew there was a great need. In short, *Summa* did for accounting what *Liber abbaci* had done for Hindu-Arabic arithmetic: it made it go mainstream, presenting it in a way that enabled ordinary people (at least those with some facility with numbers) to master the mathematical techniques required for finance and commerce. That’s the reason Pacioli is sometimes referred to as “the father of accounting.”

When we look back at the development of human society, we tend to see major leaps forward, initiated by a single individual or a small group, with longer periods of more steady progress. The bold initiators who launch society on a new path seem like superhuman geniuses, made of different mental stuff than the rest of us. But, in fact, each major leap forward is always a cumulative effect resulting from many individuals, each making small steps over years, decades, and even centuries. The intellectual giants we see absolutely deserve credit for what they did, but they are still mortals. As one of the greatest “giants” of all, Isaac Newton, famously and revealingly said, “If I have seen further than others, it is by standing upon the shoulders of giants.”

What those giants did that resulted in their names being prominent in the history books is bring together many accumulated small advances, interpret and synthesize them into a whole, and then *package *that whole in a fashion that is readily accessible to others less immersed in the details and the history. In some cases, among them Archimedes (around 250 BCE), Newton (seventeenth century), and Einstein (twentieth century), the influencers brought their own originality into the synthesis.

With others, although their own original work was in some cases significant, it is solely for their synthesis and packaging of the work of others that they are known. Such was the case for Euclid (whose mammoth text *Elements*, ca 300 BCE, established the modern canons of geometry and number theory), al-Khwārizmī (the author of the ninth century text that established algebra as a widely used tool in commerce and then later engineering and science, whom we met earlier), Leonardo of Pisa (Fibonacci—also encountered earlier—whose 1202 book *Liber abbaci* brought Hindu-Arabic arithmetic and algebra to the West), and Pacioli with his *Summa*. Though each of these authors produced other books where they presented their own work, it was the breadth and accessible quality of their expository works that changed the course of human history.

The same is true today, with technology. Two giants who changed the world in the 1980s are Steve Jobs and Bill Gates. But neither made any breakthroughs in the design of computers or the creation of software systems. Rather, they took the best of what was available, and packaged it in a way that millions of others could use. In the worlds of science, mathematics, and engineering, the professional cultures direct their admiration towards the innovators who come up with new ideas, and tend to downplay or even dismiss the individuals who package those new ideas into an accessible form that others can use. But both invention and packaging/marketing are required in order to change the world.

It is, then, as a “packager” that we must view Pacioli in order to recognize the major impact he had on the course of history.

Because of the power of the recently invented printing press to spread multiple copies of identical texts relatively cheaply and quickly, Pacioli’s book-keeping treatise, as the first printed synthesis of the method, led to a rapid adoption of Venetian book-keeping, and by 1800, use of the system was standard across Europe. But that was not the end of the revolution.

Not long afterwards, the business world found another, far-reaching use for “bookkeeping alla veneziana.” It came about as a result of the desperate efforts of an English potter to prevent his company going bankrupt.

**Josiah Wedgwood **

Today, the name Wedgwood is synonymous with fine pottery, sold all around the world. Less well known is the major influence this eighteenth century English potter had on mass-market manufacturing in the early days of the industrial revolution.

Born in Staffordshire, England in 1730, Josiah Wedgwood was a highly talented potter and, it turned out, a skillful entrepreneur. Having learned the basic skills of pottery from his father, also a potter, he founded his own company while still very young. That company (the Wedgwood Company) was one of the first to adopt an industrial, mass-production approach to manufacture (and the first to do so for the manufacture of pottery).

By the late 1760s, his traditionally produced, expensive classical designs had found a ready market among the nobility, among them Queen Charlotte (the wife of George III), whom he persuaded to grant him permission to refer to his crockery sets as the “Queen’s Ware”. (A smart marketing move.) But Wedgwood wanted more.

In order to grow his company beyond that limited market, he looked for ways to manufacture cheaper sets to sell to the rest of society. This involved both experimenting with different materials and developing ways to produce and sell at scale.

By staying abreast of scientific advances, he was able to adopt materials and methods to both revolutionize the production and improve the quality of his pottery. In particular, his unique glazes began to distinguish his mass-produced wares from anything else on the market.

He also proved to have a flair for marketing, and today he is credited as the inventor of modern marketing techniques such as illustrated catalogues distributed by direct mailings, money-back guarantees, traveling salesmen carrying samples, self-service, and free delivery.

In 1764, he received his first order from abroad. Just three years later, he was able to write of his pottery, “It is amazing how rapidly the use of it has spread all most [*sic*] over the whole Globe.”

Unfortunately, however, that rapid growth brought problems of finance, and by late 1769, Wedgwood and his partner, Thomas Bentley, had serious cash-flow problems and an accumulation of stock. Like many entrepreneurs, he found that too much early success had brought him to the edge of bankruptcy.

In response, in 1772 Wedgwood decided to use double-entry book-keeping to examine his firm’s accounts and business practices to see if there was a way for his company to survive. The results proved enlightening and, for the business world, far reaching.

He found that the firm’s pricing was haphazard, its production runs too short to be economical, and that it was spending unexpectedly large amounts on raw materials, labor and other costs, without collecting its bills fast enough to finance expanding production.

Statue of Josiah Wedgwood at the Wedgwood factory in Staffordshire, UK

He also made an important discovery: the distinction between fixed and variable costs. He immediately understood the implications of their difference for the management of his business.

He told Bentley that their greatest costs—modelling and molds, rent, fuel and wages—were fixed: “Consider that these expenses move like clockwork, and are much the same, whether the quantity of goods made be large or small.”

He realized that the more their factory produced, the cheaper these fixed costs would be per unit of production.

In other words, by scrutinizing his books using double entry, Wedgwood had uncovered the commercial benefits of mass production.
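Wedgwood’s insight fits in one formula: if a period’s fixed costs F are spread over q units, the cost per unit is v + F/q, where v is the variable cost of a single unit, and the F/q term shrinks as output grows. A sketch with invented numbers (purely illustrative, not Wedgwood’s actual figures):

```python
# Hypothetical numbers, purely illustrative of Wedgwood's observation.
FIXED_COSTS = 1200.0           # molds, rent, fuel, wages: "move like clockwork"
VARIABLE_COST_PER_UNIT = 2.0   # clay, glaze, packing for each piece

def unit_cost(quantity):
    # Fixed costs are spread over the whole production run,
    # so cost per piece = variable cost + fixed costs / quantity.
    return VARIABLE_COST_PER_UNIT + FIXED_COSTS / quantity

# The more the factory produces, the cheaper each piece becomes.
assert unit_cost(1000) < unit_cost(100)
```

At 100 pieces each one carries 12 units of fixed cost; at 1,000 pieces, only 1.2. That falling per-unit burden is the commercial logic of mass production that Wedgwood read out of his ledgers.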

To take advantage of his observation, Wedgwood had to take Pacioli’s book-keeping system and apply it beyond its mercantile origins in an exchange economy, to the world of manufacturing, where the emphasis is on the production of goods. That was a major shift, with enormous consequences, both for his company and for the world.

The need to incorporate new elements—labor and materials per unit of production—into an enterprise’s accounting system so that managers could calculate the cost of each unit of production posed significant conceptual difficulties. (That today we don’t give this a moment’s thought is further indication of how fundamentally Wedgwood’s revolution changed the world.)

The challenge was that the transactions needed to incorporate the manufacturing of products into the existing double-entry system were not financial; they did not involve an exchange of goods, but rather activities such as recording the cost of labor employed or of materials consumed in production. These “non-financial” transactions were new, and fitting them into the 300-year-old accounting system was not easy. Only after a century of factory production did such accounting problems become better understood.

Meanwhile, and not altogether unconnected, the rise of the joint-stock company brought double-entry bookkeeping center stage, giving birth to a new profession: accounting.

The huge amounts of capital expenditure required to build railways—raised from private investors on stock exchanges and managed by joint stock companies—also generated new issues of accounting and accountability.

As a result of all these advances, by the 1860s, accountants in Britain were legally required at every phase of a company’s life. Financial statements had gone from being an incidental product of an enterprise’s book-keeping system in 1800, to being bookkeeping’s raison d’être a century later.

Looking back, we see that Venetian bookkeeping proved to be an ideal system for generating the financial statements that were required for the modern industrialized world. It could accurately record capital and income (as required by law and investors), it could distinguish between private expenses and corporate costs, and it could produce data that helped to evaluate past investment decisions.

It doesn’t get more relevant and important to today’s world than that.

For additional photos from the Christie’s event where I spoke about Pacioli, see the April 27 blogpost on the Stanford Mathematics Outreach Project site.

Read the Devlin’s Angle archive.