In an era where machine learning (ML) models drive decisions across industries—from credit scoring to job recruitment—questions of transparency and fairness have never been more urgent. In her recent Gurobi webinar, Professor Dolores Romero Morales, Professor of Operations Research at Copenhagen Business School and president-elect of the Association of European Operational Research Societies (EURO), led us on a thoughtful and practical tour of how mathematical optimization can bring both clarity and equity to the world of ML.
Her talk was divided into three parts: transparency in ML, fairness in ML, and counterfactual analysis—a rapidly evolving area of research. What connected all three themes was a clear message: Optimization isn’t just useful in building better models; it’s essential for ensuring those models are understandable and just.
Professor Romero Morales began by emphasizing the growing demand for transparent machine learning models. While accuracy is a key metric for any ML system, transparency ensures that users can understand how a model arrived at a decision. This is particularly important in high-stakes domains such as finance and healthcare, where regulations increasingly require explainable AI.
Optimization plays a key role in enabling this transparency. Many common ML models, such as decision trees and ensembles, can be reformulated with optimization methods to enhance their interpretability. Citing research published in the European Journal of Operational Research, Romero Morales explained how various models can be analyzed and adjusted to balance performance with transparency.
This isn’t just a theoretical concern. Regulatory frameworks, particularly in the European Union, are pushing for more accountability in algorithmic decision-making. Optimization can help organizations meet these demands by explicitly encoding transparency constraints into their models.
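To make the idea of encoding transparency into a model concrete, here is a minimal sketch in Gurobi's Python API. It illustrates the general technique rather than any specific formulation from the webinar: a linear scoring rule is fit with a hinge loss, and binary indicator variables plus a budget k limit how many features may carry nonzero weight. The data, bounds, and budget are all invented for the example.

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

# Toy data, invented for illustration: n samples, p candidate features,
# labels y in {-1, +1}. The budget k caps how many features the rule may use.
rng = np.random.default_rng(0)
n, p, k = 60, 8, 3
X = rng.normal(size=(n, p))
y = np.where(X[:, 0] - 0.5 * X[:, 1] > 0, 1.0, -1.0)

m = gp.Model("sparse_scoring")
w = m.addVars(p, lb=-10.0, ub=10.0, name="w")   # coefficients
b = m.addVar(lb=-10.0, ub=10.0, name="b")       # intercept
z = m.addVars(p, vtype=GRB.BINARY, name="z")    # z[j] = 1 if feature j is used
xi = m.addVars(n, lb=0.0, name="xi")            # hinge-loss slack per sample

M = 10.0                                        # big-M matching the bounds on w
for j in range(p):
    m.addConstr(w[j] <= M * z[j])               # w[j] may be nonzero only if z[j] = 1
    m.addConstr(w[j] >= -M * z[j])
m.addConstr(z.sum() <= k)                       # the transparency budget

for i in range(n):
    score = gp.quicksum(w[j] * X[i, j] for j in range(p)) + b
    m.addConstr(float(y[i]) * score >= 1 - xi[i])   # soft-margin (hinge) constraint

m.setObjective(xi.sum(), GRB.MINIMIZE)          # minimize total misclassification slack
m.optimize()

print("features kept:", [j for j in range(p) if z[j].X > 0.5])
```

Relaxing the binary indicators z to continuous surrogates is the kind of compromise Romero Morales mentions later for larger models: it keeps the computation manageable, at the cost of a less crisp feature count.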
Next, Professor Romero Morales focused on fairness in machine learning, where she explored how to prevent models from discriminating against individuals based on sensitive attributes—like gender, age, or race. “When I was talking about fairness today,” she noted, “I was assuming that there is a sensitive attribute that I know in advance, and that I want to avoid discriminating against.”
But even when sensitive variables are excluded from a model, indirect discrimination can still occur. For instance, gender might be correlated with income or working hours, meaning models can still reflect gender bias even when gender is removed as a feature. This calls for more sophisticated approaches—ones that can search across a range of potentially biased attributes. Optimization helps address this by enabling the combinatorial search required to identify and mitigate such hidden biases.
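As a rough sketch of this general idea (a single group-level side constraint, rather than the combinatorial search over proxy attributes described in the talk), the example below trains a linear scoring rule without using the sensitive attribute as a feature, but constrains the average scores of the two sensitive groups to stay within a tolerance eps of each other. The data, the tolerance, and the proxy feature are assumptions made for illustration.

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

# Invented toy data: the sensitive attribute s is NOT a model feature,
# but it is available at training time so group-level outcomes can be
# constrained. Bias can still leak in through features correlated with s.
rng = np.random.default_rng(1)
n, p = 80, 5
s = rng.integers(0, 2, size=n)                  # sensitive group membership
X = rng.normal(size=(n, p))
X[:, 0] += 0.8 * s                              # feature 0 acts as a proxy for s
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

m = gp.Model("fair_scoring")
w = m.addVars(p, lb=-5.0, ub=5.0, name="w")
b = m.addVar(lb=-5.0, ub=5.0, name="b")
xi = m.addVars(n, lb=0.0, name="xi")

def score(i):
    return gp.quicksum(w[j] * X[i, j] for j in range(p)) + b

for i in range(n):
    m.addConstr(float(y[i]) * score(i) >= 1 - xi[i])   # soft-margin fit

# Fairness side constraint: the groups' average scores must stay within eps.
g0 = [i for i in range(n) if s[i] == 0]
g1 = [i for i in range(n) if s[i] == 1]
eps = 0.05                                      # tolerance, an assumption
gap = (1.0 / len(g0)) * gp.quicksum(score(i) for i in g0) \
    - (1.0 / len(g1)) * gp.quicksum(score(i) for i in g1)
m.addConstr(gap <= eps)
m.addConstr(gap >= -eps)

m.setObjective(xi.sum(), GRB.MINIMIZE)
m.optimize()
print("coefficients:", [round(w[j].X, 3) for j in range(p)])
```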
A practical challenge that often arises in this space is model scalability. When asked about the size of models her methods could handle, Romero Morales noted that “it depends.” Lighter models like explainable tree ensembles are more tractable, while large-scale forests or neural networks require compromises, such as using surrogate continuous variables instead of binary decisions, to keep computations manageable.
The third and final section addressed counterfactual analysis, which has gained significant traction in recent years. This technique asks: what would need to change for a different outcome to occur? For instance, if a loan application was denied, what variables could the applicant change to get approval?
Professor Romero Morales and her team are contributing to this field by using optimization to generate counterfactual explanations that are realistic, actionable, and fair. The idea is not just to generate “any” counterfactual, but to produce those that respect constraints and avoid introducing new forms of bias.
This type of analysis can be resource-intensive, especially when applied to complex models. Yet again, optimization provides a framework for navigating this computational challenge—balancing the need for precision with practical tractability.
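For the simplest possible case, a fixed linear scoring rule, the mechanics look like the sketch below. This is a generic illustration rather than the formulation used by Professor Romero Morales's group: it finds the smallest change (in L1 distance) to a denied applicant's features that flips the decision to approval, while holding immutable attributes such as age fixed. The weights, threshold, and feature meanings are hypothetical.

```python
import gurobipy as gp
from gurobipy import GRB

# Hypothetical, already-trained linear credit-scoring rule: approve if
# w.x + b >= 0. Features are scaled to [0, 1]; all numbers are invented.
w = [0.8, 0.5, -0.4, 0.3]               # income, tenure, debt ratio, age
b = -1.0
x0 = [0.6, 0.2, 0.9, 0.5]               # the denied applicant's current features
immutable = [3]                         # actionability: age cannot be changed
p = len(w)

m = gp.Model("counterfactual")
x = m.addVars(p, lb=0.0, ub=1.0, name="x")    # counterfactual feature values
d = m.addVars(p, lb=0.0, name="d")            # d[j] >= |x[j] - x0[j]|

for j in range(p):
    m.addConstr(d[j] >= x[j] - x0[j])         # linearize the absolute value
    m.addConstr(d[j] >= x0[j] - x[j])
for j in immutable:
    m.addConstr(x[j] == x0[j])                # keep immutable features fixed

# The counterfactual must flip the decision from "deny" to "approve".
m.addConstr(gp.quicksum(w[j] * x[j] for j in range(p)) + b >= 0.0)

m.setObjective(d.sum(), GRB.MINIMIZE)         # smallest total change (L1 distance)
m.optimize()

if m.Status == GRB.OPTIMAL:
    changes = {j: round(x[j].X - x0[j], 3) for j in range(p)}
    print("suggested feature changes:", changes)
```

Here the problem is just a linear program; for richer models such as tree ensembles, the same question requires encoding the model's internal structure in the optimization, which is where much of the computational cost noted above comes from.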
The importance of these topics extends beyond research and regulation—they’re also essential for education. In response to an audience question, Professor Romero Morales shared how she incorporates transparency and fairness into her teaching. While she doesn’t dedicate an entire course to the topic, she consistently emphasizes trade-offs between model accuracy, interpretability, and fairness. “I always like to tell the students that we make our mathematical optimization more complex to ensure that our machine learning model is less opaque,” she said.
One attendee asked a pertinent question: “Is optimization even needed anymore, now that we have generative AI?” Professor Romero Morales responded with both humility and conviction. While acknowledging the power of generative tools, she stressed that they are complementary—not replacements—for the structured, goal-oriented approaches of optimization.
“I love the flexibility that operations research gives you,” she said. “I always like to see the combination of the human and the computer.”
Professor Romero Morales’ presentation was a powerful reminder that optimization is not just a technical tool. It’s a lens through which we can build more transparent, fair, and ultimately more trustworthy machine learning systems. As the regulatory landscape evolves and the societal impact of ML grows, organizations must go beyond accuracy and performance. They must also prioritize clarity, equity, and accountability.
Thanks to researchers like Professor Romero Morales, and technologies like Gurobi, we now have the mathematical and computational tools to do just that.
To dive deeper into how optimization can drive transparency and fairness in machine learning, watch the full webinar featuring Professor Dolores Romero Morales. Click here to view the session.
Senior Director of Academic Programs
Lindsay brings over 13 years of experience working at the intersection of technology and education. Prior to Gurobi, Lindsay worked as an operations leader at Opex Analytics, a product and services firm dedicated to solving complex business problems using the power of artificial intelligence. While there, she focused on growth and business development, product launch, and marketing. Lindsay spent 10 years working in various leadership capacities at universities including Columbia University, Northwestern University, and the University of Chicago. From 2013 to 2017, she worked to establish and grow the Master of Science in Analytics degree at Northwestern University’s School of Engineering; the program was one of the earliest MS degrees focused on an applied data science curriculum. During her time with Northwestern, she managed external and corporate relations, helped hire and onboard new faculty and subject-matter experts in various disciplines of analytics, directed recruiting, admissions, and student advising, and managed a team of administrative professionals. Prior to Northwestern, she spent over 5 years working in Advancement at Columbia University’s School of Engineering and Applied Science. She completed her bachelor’s degree in English and Fine Art at Sewanee: The University of the South and her master’s degree in Nonprofit Management at Columbia University.