February 23, 2018
Writer: Elisabeth Greenbaum Kasson

Artificial Knowing: Media Design alumnae fight discrimination, one algorithm at a time

In an airy, white loft space, in a fantastical art deco factory building in downtown Los Angeles, Media Design Practices (MDP) alumnae Christine Meinders (MFA 2017) and Selwa Sweidan (MFA 2016) are working to gather the data and build prototypes of radically inclusive artificial intelligence (AI) platforms.

The duo, who met in the MDP program, use Meinders' MFA thesis project as a jumping-off point for their work. "My project was about combating discrimination in AI," says Meinders, "and one of the most interesting aspects of my thesis is integral to what we do now, which is use collaborative making to turn the usual AI development and design process on its head."

Meinders, an AI designer and researcher who uses collaborative and inclusive design approaches, and Sweidan, a researcher and innovations consultant who prototypes emerging AI technologies, are among a small, impassioned group of AI technologists spearheading much-needed change.

Their work couldn't come at a more critical time. 

Image by Christine Meinders and Amanda Jensen (MFA 17 Media Design)

Selwa Sweidan (left) and Christine Meinders
Photo: Grace Kim

There are multiple types of knowing. The unheard voices in these spaces bring in new ways of thinking.

Christine Meinders

Artificial intelligence is being dispersed so rapidly across industries and environments that its spread is outpacing its development. This gap has exposed flaws in the process, an urgent problem, as nearly every one of us is, or will soon be, impacted by AI.

The most crucial weakness is that the machine learning algorithms that drive AI are being coded by young, mostly white men in Silicon Valley and other tech hubs. And as these platforms increasingly replace human decision makers, the homogeneous nature of their early development is revealing itself in inherently racist and sexist outcomes.

AI's lack of inclusiveness is best illustrated by several recent, high-profile gaffes: a Microsoft chatbot named "Tay" uttered racist, misogynist and homophobic slurs; Google Photos' facial recognition software repeatedly labeled photos of an African American couple as "gorillas"; and Facebook's AI-driven advertising platform allowed advertisers to target people interested in anti-Semitic topics.

To challenge this majority of algorithmic authors, Meinders and Sweidan are mining post-human, inclusive feminist knowledge and theory to disrupt the field's thinking, methodologies and demographics.

Even their company's name, Artificial Knowing, is a nod to acclaimed technologist Alison Adam, whose 1998 book Artificial Knowing: Gender and the Thinking Machine first sounded the alarm about the male, cis-gendered direction in which AI was moving.

Artificial Knowing is an AI innovation consultancy that offers AI design research to a corporate clientele and develops its own proprietary methodologies, but a large part of its research practice is public-facing and deeply invested in the embodied knowledge found in diverse communities.

The two are determined to bring more voices into the space by introducing AI to underserved communities and actively facilitating civic discourse with groups via training programs. Their goal is to empower participants to take control of systems that have been unavailable to them, nurture emerging methodologies, and collect data sets that can be shared.

To work with a community, Meinders and Sweidan have to be invited, usually via contacts at non-profit organizations. After a series of meetings with community members, where local needs are determined, a course of action is decided upon and a product begins to take shape. The participants are then given project-appropriate technology and are trained to create and cultivate their own AI.

Inclusion should really test the entire research process. In order to innovate, we have to rethink the point of references, the thinking, the briefs, and the teams.

Selwa Sweidan

"There are multiple types of knowing," says Meinders, who notes that the programs are also where they can observe how these embodied knowledge bases inform and illuminate data. "The unheard voices in these spaces bring in new ways of thinking. It's about making space for multiple voices to design intelligence systems." 

With that in mind, Sweidan is quick to point out that embodied knowledge and embodied research from communities outside the tech world are often dismissed as a lesser point of reference by AI gatekeepers, when in actuality they should be foundational. It may be blasphemy in Silicon Valley, but per Sweidan and Meinders, inclusion, not code, must be the starting point.

By placing inclusion at the start and heart of the process, makers would bring broad and deep sets of experiences to the table as they begin to ideate projects. According to Meinders, anything short of that would be akin to putting on a band-aid after the wound has healed. 

Sweidan mentions that the current approach to inclusion often avoids complexity altogether and instead tacks on specific language at the culmination of the project. "Inclusion should really test the entire research process," she adds. "In order to innovate, we have to rethink the point of references, the thinking, the briefs, and the teams. We're 'updating' these design research practices by challenging the aesthetics and the thinking behind the computation."

Shifting perspective doesn't just give a voice to a greater demographic; it reimagines how products are created for the marketplace. "When you engage in more collaborative making, listen to unheard voices, and consider different ways of thinking, instead of focusing on product-driven making," stresses Meinders, "you could come up with many different ways to create multiple products from a variety of concepts, instead of just one that you've already decided upon."

If AI is to represent all its users, developers in the space have to ask who they're really designing for, and then go further. The needs of a demographic in a single environment are hardly standardized; breaking the group into subsets ultimately engages the greatest number of people. After determining the real users, Meinders and Sweidan suggest makers go even deeper by asking themselves: What are we really making, why are we making it and how can we improve it?

The implications of Artificial Knowing’s work are best illustrated by a research project in which they were invited by local performance artists to build a movement-based AI piece with community members. 

The researchers decided to focus on the participants' hands, since using hand movements made the physical performance accessible to the entire group. They then set up Wekinator, an open-source software program that allows anyone to use machine learning to build interactive systems that map human movement to computer responses. One by one, the community members were encouraged to improvise their own hand movements in front of the computer, creating a continuously accumulating choreography.
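
For readers curious about the plumbing, here is a minimal sketch of how hand features might be streamed to and from a setup like this. It is not Artificial Knowing's actual code: it assumes the python-osc package and Wekinator's default OSC settings, which accept inputs on port 6448 at /wek/inputs and send trained-model outputs on port 12000 at /wek/outputs, and the "hand features" below are random placeholders standing in for values a camera or sensor would supply.

    # A minimal sketch, assuming the python-osc package
    # (pip install python-osc) and Wekinator running locally with its
    # default settings: inputs on port 6448 at /wek/inputs, trained-model
    # outputs on port 12000 at /wek/outputs.

    import random
    import threading
    import time

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import ThreadingOSCUDPServer
    from pythonosc.udp_client import SimpleUDPClient

    def handle_output(address, *values):
        # Wekinator's learned response, e.g. parameters driving sound or visuals.
        print(f"model output: {values}")

    # Listen in the background for whatever the trained model sends back.
    dispatcher = Dispatcher()
    dispatcher.map("/wek/outputs", handle_output)
    server = ThreadingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Stream input features to Wekinator at roughly 30 frames per second.
    client = SimpleUDPClient("127.0.0.1", 6448)
    for _ in range(300):
        features = [random.random() for _ in range(3)]  # stand-in hand features
        client.send_message("/wek/inputs", features)
        time.sleep(1 / 30)

In a real session, the placeholder features would come from a camera or sensor tracking each participant's hand, and the returned outputs would drive the responses the next improviser reacts to.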

But they didn't just ask the participants to move their hands around. They also asked them what they thought about the process itself. And they asked them how they felt about the machine learning their movements and then using those gestures to influence the next person's improvisation. 

"We ended up having conversations about what it means to have this machine intelligence and embodied intelligence coming together," Sweidan says. "It allowed us to declare this a site-specific data set. We documented and demonstrated what it means to capture knowledge in a specific community, and specific location that's tied to the community. The participants were also witnessing each other being part of the discourse, and part of the thinking. That's what it means to be inclusive."

While site-specific training is fundamentally a democratization of the process, taken further it shows a community shaping elements of its environment via literal and cultural inputs. The data set collected from the hand-improvisation event could potentially influence the development of any AI that involves the use of hands, such as an autonomous vehicle's dashboard, a game console or a medical device.

Meinders is also careful to note the importance of diversity within every community. Different body types, skin tones, speech patterns and so on should continue to inform the data and help shape the code during the ideation process, which, in turn, will build a significantly more knowledgeable AI. "It's the outliers that can throw the data into a very exciting place," she enthuses. "The hands of a community are never going to be uniform."

Meinders and Sweidan will appear at SXSW as part of its AI x Radical Inclusion session on March 11, 2018. At this event, Sweidan will moderate a panel discussion featuring Meinders and an interdisciplinary group of experts on the challenges of inclusion in the AI design space, cultural design implications of the tools and approaches to AI creation, and how speculative approaches address the cultural and social perspectives in AI.

Los Angeles-based Elisabeth Greenbaum Kasson covers the collision of culture, technology and business. Her work appears in the Los Angeles Times, Documentary magazine, Los Angeles magazine and other publications.