Report: AI for Good

Written by Samantha Thorne

Presenters: David Monsma (Executive Director, The Aspen Institute), Anil Gupta (Michael Dingman Chair in Strategy, Globalization & Entrepreneurship, University of Maryland), Charlotte Stanton (Silicon Valley Director, Carnegie Endowment for International Peace), Tess Posner (CEO, AI4ALL), Dekai Wu (Professor, Hong Kong University of Science and Technology), Michl Binderbauer (Co-founder & CTO, TAE Technologies), Alpesh Shah (Senior Director, IEEE Standards Association)

To read the full report, see the digital edition.

Artificial intelligence has immense potential to better mankind. Yet people perceive humanity's inherent goodness differently, from Aristotelian to Hobbesian to Rousseauian views, which raises the question of what role government and social contracts play in the new technological era, and whether artificial intelligence will exacerbate or alleviate human shortcomings. How, then, can artificial intelligence be deployed to better mankind? Since "better" and "good" are ambiguous terms, they require more concrete definitions and practical measures if we are to achieve "AI for good."

The “AI for Good” panel, moderated by David Monsma, set out to define good AI applications and to prescribe ways to achieve AI's positive assimilation into society. The speakers recognized technology's quickening pace as the world undergoes a three-part evolution among humans, hybrids and machines. Individuals, communities, enterprises and governments now face greater pressure to avoid AI spillovers and misuses. To ensure that artificial intelligence is deployed positively in human society, the technology must be human-centric, diverse, empathetic and environmentally sustainable.


AI development must prioritize inclusivity and diversity. Inclusive AI technologies and applications ensure that the field reaches its maximum potential in a responsible and equitable manner, and to achieve inclusivity, the tech community and its partners must prioritize diversity. Educating the next generation of AI technologists will help ensure that artificial intelligence's benefits extend to all facets of society. By focusing on educating underrepresented groups and the rising generation of thought leaders in AI, society can overcome technological exclusivity and inequality.

Good AI avoids divisiveness and inequality. Artificial intelligence for good endeavors to dissolve divisive and unequal applications of technology. Tess Posner notes that AI technology should serve all groups within society and pursue more equal and diverse approaches to problem-solving. Because AI enables machines to form opinions that can change our own, it carries real clout; AI is not merely a mechanical tool or a passive servant. Cambridge Analytica, Charlotte Stanton notes, serves as an example of divisive AI misuse and reveals that there is a price to pay when AI goes wrong. Dekai Wu views it as artificial intelligence's duty to counteract such divisive deployments by increasing empathy.

AI education programs will overcome tech inequalities. AI4ALL takes the mission of fostering AI diversity and inclusivity to heart when educating the next generation of technologists. The organization’s education programs and summer camps provide AI development training to underrepresented students. By applying their different backgrounds and perspectives to building technologies and overcoming barriers they see in the world, young AI developers create an inclusive technological environment. Whether it is ensuring privacy, improving how programming is explained, or mitigating bias risks, AI4ALL programs highlight challenges and considerations that the rising generation should recognize when building technology from the ground up.

Community engagement plays a role in making AI equitable. Communities help foster beneficial AI environments. When providing students with an artificial intelligence education, AI4ALL focuses not only on improving particular communities through its programs, but also on having students apply their technologies in their own communities to solve issues they personally experience. Posner underscored one high school girl's AI project: as the daughter of rural farmers, the student built a project to provide clean water to her community, which faced water sanitation issues. Furthermore, a community approach to AI can serve as a social safety net when no market exists for certain AI principles or technologies. Wu, specifically, espouses the idea that beneficial technologies cannot always be monetized and that communities should play a role in AI work since, often, incentives will not come from individual investment firms.

Ethics courses are necessary but insufficient. To achieve AI for good, technologists need to focus on ethics. Ethically aligned design work ensures that artificial intelligence developers prioritize positive applications from the beginning. As tech culture hyper-accelerates amid an exponential increase in "artificial children" technologies, engineers face more pressure to remain morally focused throughout all stages of AI implementation. To guarantee that technology prioritizes ethics, both education and the market play significant roles.

Engineers should receive core ethics training. Just as business students must take core ethics classes during their undergraduate and graduate degrees, Anil Gupta proposes the same should apply to engineers and computer scientists. This provides a bottom-up approach to improving AI ethics and focuses on the rising generation's role in AI development. Ethics in technology needs to become part of the rhetoric of achieving positive AI, and such an institutional approach can help.

Ethics needs to become opportunistic in the market. While ethics training contributes to improving AI's morality and positive uses, an economic focus supplements the academic approach by practically promoting ethics in artificial intelligence. Alpesh Shah notes that tech companies and startups are more likely to adopt a human-centered and ethical approach if incentivized by tax cuts and a smart tax structure. Though AI ethics and market incentives are currently misaligned, the economic and tech communities must work together to align market forces with improved AI ethics. So while there is skepticism about the efficacy of ethics classes and their translation into practice, adding market incentives can help promote AI ethics. And as Wu and Posner recognize, communities should step in where market forces fall short of promoting technological ethics.

AI needs parenting. The manner in which we interact with technology and artificial intelligence will have a lasting effect, as AI heavily impacts everyone's lives, including children's. Since AI for good focuses on making AI technologies and applications ethical, teaching the rising generation to interact morally with technologies and devices will not only ensure they benefit fully from AI's potential, but will also alter the way AI interacts with humanity.

Parents need to raise children to properly interface with AI. Ethical behavior starts in the home. Children observe their parents and adopt their parents' attitudes, mannerisms and practices; thus, it is the parents' responsibility to instill culture, values and ethics in a child. To ensure that AI is morally applied for beneficial purposes in the future, when AI is at the forefront of society, parents should be conscientious when interacting with their AI devices.

Artificial intelligence matures throughout its life. Just as children absorb practices and habits from their parents, machine learning collects data from its users and surrounding environment, using that information to adjust its own behavior. As machine learning matures and becomes more autonomous, it eventually reaches a point where its intelligence is no longer artificial but, as Shah recognizes, organic. The "artificial children" have moved out of the house. Thus, in AI's early stages, during the design process and early user interactions, humans need to recognize that the technology will likely make mistakes. It is important that AI be trained to learn from past aberrations so it can alert users when undesirable patterns and dangers emerge.

Individuals need to determine technological ground rules. In an effort to rear technology, AI developers must implement ground rules. Michl Binderbauer notes that identifying what humans want out of AI, and the technology's end purpose, will help advance human knowledge generation and achieve TAE Technologies' mission to accelerate learning and AI processes. This places pressure on individuals and societies to identify key technological ground rules, requiring a mindset shift toward thinking about AI holistically and collaboratively. Rearing technology takes a village: partners across sectors, industries and borders must decide together what rules to apply to AI.

Collaboration across borders, sectors and industries will improve AI. AI is collaborative: if a user cannot collaborate with AI, then the technology and device are not good. This litmus test applies to the technology itself, and it applies equally to collaborative efforts across industries, sectors and nations. AI will overcome more of the world's challenges in a positive manner as more individuals play a role in its design and implementation going forward.

Data needs to be unlocked. A great deal of data remains locked up, adversely affecting the entire ecosystem, as when corporations cut their R&D budgets and use startup companies as external incubators. Data lockup is also prevalent between countries, as data nationalism pervades digital globalization. The expansive, fast-paced nature of AI instead calls for a change in governance and a heightened sense of multilateralism, via soft laws and treaties as well as cross-border data flows. Unlocking data both between companies and between countries will allow society to keep up with AI's changing environment and global nature.

China and the US have different approaches to developing AI. The two countries' governments approach AI differently, with China applying top-down policies and the United States favoring market forces. China's more conservative methods and the United States' reactive approach to AI policy will play out differently within each country, according to Wu. Since the question of AI is ultimately a question about humanity itself, the technology will evolve differently depending on cultural norms; even so, collaboration across these differences will enrich both nations and advance AI for good moving forward.