
Racial Bias in Health Care Machine Learning Algorithms
Esther Akinpelu '22
Machine learning algorithms in healthcare are racially biased, especially when diagnosing patients and choosing candidates for care management programs.

The Effect of Social Media on Polarization
John Burt '22
This poster examines the impact of social media on political polarization in the United States. Since the 2016 elections, the political parties and their constituencies have shifted away from the center. Some have argued that this is due in large part to the algorithms that social media platforms use.

The Mathematics of Misinformation
Philip Cate '22
The spread of false or misleading information online is destructive to our society and democracy. From a mathematical perspective, how do social media networks and algorithms enable the spread of misinformation on the internet, and how can it be fixed?

Accessible and Intuitive Mathematical Notation
Anthony Christiana '22
For math learners with disabilities, engagement with traditional mathematical notation can be difficult or impossible. Here, we explore two examples of the ways that notation can fail disabled students. We also note how accommodating for these disabilities allows us to imagine better ways of serving all students through notational practices.

Ethical Use of Machine Learning in Higher Education Admission
Siqi Fang '23
A machine learning model called GRADE was used for PhD admission at UT Austin from 2013 to 2020. The model was trained on a small set of past admission decisions, which were already biased, and was used immediately without further tuning or human validation. The model scored all applicants, and for those with the highest and lowest scores the decision was made without further human assessment. Only 362 of 588 applications received a full human review; of the remainder, a few were admitted and the majority were rejected by the algorithm.

Cross-Sectoral Climate Change Modeling
Jeremy Gordon '22
Although climate data originates from various integrated sectors, it continues to be evaluated in isolation. By using cross-sectoral modeling, climate change modeling can more effectively represent data and capture the possible consequences of misjudging future models.

Tips for Success in Math Courses
Joshua Horowitz '22
Five steps in the development process for succeeding in mathematics.

Is U.S. News College Ranking a Weapon of Math Destruction?
Zhipeng Kui '22
This poster presents Cathy O’Neil’s arguments from her book Weapons of Math Destruction and connects them with Columbia’s ongoing ranking scandal to show why we should view U.S. News college ranking as a WMD.

How Does the FICO Score Discriminate Against People?
Stephana Lim '22
The FICO Score, the standard credit score, acts as a weapon of math destruction: its algorithm systematically discriminates against disadvantaged racial groups.

Ranked Choice Voting: Who’s the real winner?
Brendan Magill '23
Ranked choice voting is a voting system in which voters rank candidates in order of preference. The winning candidate is determined by eliminating candidates and reassigning their votes until one has a majority. It provides solutions to some of the fundamental issues with traditional voting. But is it more fair? Should it be instituted in American democratic elections?
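The eliminate-and-reassign process described above can be sketched in a few lines of Python (the ballots below are hypothetical, invented for illustration):

```python
# Minimal sketch of instant-runoff (ranked choice) counting.
# Each ballot is a list of candidates in order of preference.
from collections import Counter

def instant_runoff(ballots):
    """Eliminate last-place candidates and reassign votes until one has a majority."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked remaining candidate.
        tallies = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)  # skip exhausted ballots
        )
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):
            return leader  # majority of remaining ballots
        # Otherwise eliminate the candidate with the fewest top-choice votes.
        candidates.remove(min(tallies, key=tallies.get))

# A has the most first-choice votes, yet B wins once C is eliminated
# and C's supporters' votes are reassigned to B.
ballots = [["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3 + [["C", "B", "A"]] * 2
print(instant_runoff(ballots))  # → B
```

This toy election illustrates the poster's question: the plurality winner (A) and the ranked choice winner (B) need not agree, so "who's the real winner?" depends on the counting rule.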

Belonging in Math and How it’s Affected by Groupwork
Kara Mathes '22
Interactions in groups for class assignments are an important part of a student’s sense of belonging in mathematics and other academic disciplines. This sense of belonging has an impact on the student’s sense of self as well as their academic performance.

The Impossible Theorem of Fairness
Man Nguyen '22
With the growth of machine learning, there has been an increase in machine biases that can cause wrongful discrimination. To implement “fairness,” several statistical conceptions of fairness have been proposed. However, statisticians have found that these conceptions contradict one another, leaving us with an impossible conundrum of fairness in machine learning. In high-stakes cases, we want to investigate the best fairness measures, if one is possible. Moreover, we would like to determine when these fairness measures fail and what conditions must be met for them to succeed.

Gender Discrimination in AI
Nicole Papert '22
Several studies reveal that AI tools used in job search applications and sites promote different opportunities depending on the gender of the user. Specifically, men are typically shown more competitive job postings than women, even when users have similar skills and qualifications. In this poster, I closely examine why algorithms used in the job search process may present opportunities differently to men and women.

Reframing the Achievement Gap in Mathematics Education
Madeline Pavolovich '22
Much of math education research focuses on a so-called “achievement gap” measured by test scores and similar quantitative measures of “success.” By framing issues of equity in this way, methods to remedy the gap often require subordinated groups to become more like the “higher achieving” group. In other areas of study, we have been quicker to incorporate sociopolitical lenses into teaching and learning with an emphasis on identity and power. Such a sociopolitical turn could help us rethink what it means for math education to be truly equitable.

Bail Reform
Matthew Shang '22
The Public Safety Assessment (PSA) is used to assess pretrial risk. This information is then used to decide the conditions of release, including components like bail. The factors considered relate only to age and criminal history. In short, the algorithm helps judges make more informed decisions.

Experiential Learning in Mathematics
Lara Speer '23
The United States lags behind many other developed countries in math skills. The root of the problem lies not in mathematics itself, but in the way the United States approaches teaching math: through memorization. One alternative pedagogical method to memorization is experiential learning, the process by which students acquire new knowledge through experience, meant to reflect the way people learn in the real world.

Bias in Mortgage Approval Algorithms
Carter Steckbeck '22
In recent years, financial institutions have been using machine learning algorithms for the mortgage approval process. Utilizing hundreds of complex variables, these algorithms decide whether an applicant gets approved. Recent studies have shown that people of color are denied mortgages at disproportionately higher rates than White applicants.

Designing a Microfluidic Sorting Network with Heat-Treated Plastic
Houghton Yonge '18, Fuming Qui '19, and Viva R. Horowitz
A microfluidic device is necessary to sort nanodiamonds based on their luminescence. We explored utilizing the repeatable shrinkage of heat-treated Shrinky Dink (polystyrene) sheets in an effort to find an easier, cheaper alternative to the traditional photolithography process. Our work found encouraging results, but the plastic’s capabilities must be further studied to decisively determine its usefulness.