Artificial Intelligence as a Weapon for Hate and Racism

The stunning advancement of artificial intelligence and machine learning has brought real benefits to society. These technologies have improved medicine, for example, speeding how quickly doctors can diagnose disease. IBM’s AI platform Watson helps reduce water waste in drought-stricken areas. AI even entertains us—the more you use Netflix, the more it learns your viewing preferences and suggests titles based on what you like to watch.

However, there is a very dark side to AI, one that worries many social scientists and some in the tech industry. They say it is especially troubling that AI and machine learning are advancing so quickly in the current political climate.

In an insightful session at SXSW, Kate Crawford, a principal researcher at Microsoft Research, laid out some deeply disturbing scenarios involving AI.

AI and the Rise of Fascism

“Just as we see AI advancing, something is happening: the rise of nationalism, of right-wing imperialism, and fascism,” said Crawford. “It’s happening here in the U.S., but it’s also happening in Spain, Germany, in France […] The turn to authoritarianism is very different in every one of these countries, but as political scientists have pointed out, they have some shared characteristics: […] the desire to centralize power, to track populations and demonize outsiders, and to claim authority and neutrality without being held accountable.”

How does AI factor into this? According to Crawford, “AI is really, really good at centralizing power; at claiming a type of scientific neutrality without being transparent. And this matters, because we are witnessing the historic rise in an anti-democratic political logic.”

Crawford pointed to one example, a startup called Faception that claims to use AI and facial recognition to identify terrorists by their faces. She likened this use of AI to the pseudoscience of phrenology—the study of facial and skull features to determine personality traits. “These kinds of debunked scientific practices were used to justify the mass murdering of Jews and slavery in the U.S.,” Crawford said.

“I think it’s worrying we’re seeing these things from the past get a rerun in AI studies,” Crawford told the audience. “Essentially, AI phrenology is on the rise at the same time as the re-rise of authoritarianism. Because, even great tools can be misapplied and can be used to produce the wrong conclusions, and that can be disastrous, if used [by those] who want to centralize their power and erase their accountability.”

Human Bias Comes Into Play

Machines are increasingly being given the same kinds of tasks phrenologists once claimed to perform: making predictions about segments of the population, often based on visual algorithms. During her discussion, Crawford demonstrated how visual algorithms can produce badly incorrect and biased results. She described the data on which these facial recognition and machine learning systems are trained as “human-trained.”

“Human-trained data contains all of our biases and stereotypes,” she said. Crawford also said that AI and machine learning can be used in ways we don’t even realize. “Say, for example, a car insurer that wants to look at people’s Facebook posts. If [a person] is using exclamation marks [in their posts], the insurer might charge them [more] for their car insurance, because exclamations mean you are a little bit rash.”
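
To make the insurance example concrete, here is a minimal sketch, in Python, of how an arbitrary signal like exclamation marks can end up baked into a pricing model. Every feature name, weight, and data point below is hypothetical; this is not any insurer’s actual model.

```python
# A toy illustration of Crawford's insurance example: a pricing model
# that treats exclamation marks in social posts as a "rashness" signal.
# All feature names, weights, and data are hypothetical.

def rashness_score(posts: list[str]) -> float:
    """Fraction of posts containing an exclamation mark."""
    if not posts:
        return 0.0
    return sum("!" in p for p in posts) / len(posts)

def monthly_premium(base_rate: float, posts: list[str]) -> float:
    # The model silently encodes the stereotype "exclamations = rash driver";
    # nothing here validates that the proxy actually predicts accident risk.
    surcharge = 1.0 + 0.25 * rashness_score(posts)
    return base_rate * surcharge

posts = ["Just got a new car!", "Loving this weather!", "Quiet day today."]
print(monthly_premium(100.0, posts))  # ~116.67: charged more for punctuation
```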

AI and the Police State

Crawford said the biases and errors of AI become dangerous when they are intertwined with social institutions such as the justice system. She cited problems with an emerging application of machine learning: predictive policing.

“Police systems ingest huge amounts of historical crime data as a way of predicting where future crime might happen, where the hotspots will be,” she explained. “But, they have this unfortunate side effect: the neighborhoods that have had the worst policing in the past are the ones that are coming out as the future hotspots each time. So, you end up in this vicious circle where the most policed areas [now] become the most policed areas in the future.”

Crawford said that a study done on Chicago’s predictive policing efforts showed that the technology was “completely ineffective at predicting future crime.” The only thing it did was increase harassment of people in hotspot areas.
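
The feedback loop Crawford describes can be shown with a toy simulation. This is a deliberately simplified sketch built on one assumption, that crimes are recorded only where patrols are sent; it models no real police system.

```python
import random

# Toy simulation of the predictive-policing feedback loop. Every
# neighborhood has the SAME true crime rate by construction, but
# crimes are only *recorded* where officers are present to see them.
random.seed(0)
TRUE_CRIME_RATE = 0.3          # identical everywhere, by assumption
recorded = [30, 10, 10, 10]    # neighborhood 0 starts out over-policed

for year in range(10):
    # "Predict" hotspots from historical records, then allocate 60 patrols
    total = sum(recorded)
    patrols = [round(60 * r / total) for r in recorded]
    # A crime enters the data only when a patrol is there to record it
    for i, p in enumerate(patrols):
        recorded[i] += sum(random.random() < TRUE_CRIME_RATE for _ in range(p))

print(recorded)  # neighborhood 0 dominates the "data" despite equal true rates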

She ended the discussion by calling for a new resistance movement, one that actively monitors and raises awareness of the ways AI can harm society, especially in the hands of dictators or those who would use the technology to manipulate others.

“iRobot” is Now a Very Real (And Scary) Thing – Watch!

A showstopping exhibit at SXSW this year did not feature humans. Rather, it featured two humanoid robots having a discussion.

The conversation took place between “Skeleton,” an “exo-naked robot” with the same eerie look as the robots in the Will Smith movie I, Robot, and a female humanoid robot named “android U.” The robots discussed the skills needed to make ramen versus sushi in very natural human dialogue.

Their discussion was an example of an advanced discussion dialogue system, the result of a collaboration between two Japanese researchers: Dr. Hiroshi Ishiguro, a lauded artificial intelligence roboticist at Osaka University, and Dr. Ryuichiro Higashinaka, a scientist at NTT Communication Science Laboratories who specializes in human communication and artificial intelligence.

The robots actually listen to one another and then formulate responses based on what the other has said. The discussion dialogue system uses NTT’s proprietary voice recognition technology.
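
NTT has not published the system’s internals, so what follows is only a bare-bones sketch of the listen-then-respond loop that any spoken dialogue system cycles through. Every function name here is a hypothetical stand-in for the real speech recognition, response generation, and speech synthesis components.

```python
# Bare-bones turn-taking loop in the spirit of the exhibit. All names
# are hypothetical placeholders, not NTT's or Ishiguro Lab's code.

def recognize(audio: str) -> str:
    return audio  # stand-in for voice recognition

def respond(speaker: str, heard: str) -> str:
    # Stand-in for response generation conditioned on the last utterance
    return f"Responding to '{heard}': here is {speaker}'s view on that."

def speak(speaker: str, text: str) -> None:
    print(f"{speaker}: {text}")  # stand-in for speech synthesis

utterance = "What skills does making ramen need?"
for turn in range(4):
    speaker = ("Skeleton", "android U")[turn % 2]
    heard = recognize(utterance)           # listen to the other robot
    utterance = respond(speaker, heard)    # reply based on what was heard
    speak(speaker, utterance)
```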

The exhibit was part of the “Japan Factory” showcase, which featured the most cutting-edge technology and science from the island nation. Last year, the Japanese tech floor show welcomed over 4,000 visitors.

“In this age, robots can talk like humans, and virtual reality is able to turn fantasy into a reality,” said Kumiko Kitamura, executive producer of Japan Factory, in a press release.

“The fusion of humans and robots, the real and virtual, technology and craftsmanship, and consciousness and the subconscious all happens synchronously. Echoing this trend, Japan Factory wants to reveal the great potential of the intersection of technology and information; as well as how it can enhance the way we live,” said Kitamura.

According to a statement from Dr. Ishiguro’s Ishiguro Laboratory displayed at the exhibit, the purpose of the robot discussion was to demonstrate the team’s work to “develop new generation information infrastructures in human-like robots that can naturally interact with humans.”

Some forecasts predict that robots will be integrated into every facet of human life by 2025. The race is on to make them easy for the average person to interact with.

There is also growing anxiety about robots displacing much of human labor, a subject covered at this year’s SXSW in the session “Robots vs Jobs: Technological Displacement is Here.”

Watch the interaction between the two robots in the video below:

When Chatbots Lack Diversity, This Is What Happens

Now that chatbots have caught on, brands are trying to figure out how to create bots that are appealing and intelligent, and that don’t convey negative stereotypes.

This goes beyond the question of whether a bot should present as male, female, or gender neutral, and jumps in with both feet to the question of diversity. Brands must walk a fine line between getting their message across and addressing the needs of a broad spectrum of people.

Chatbots Gone Wild

Many chatbots make use of a ‘learning process’ that draws on human contributions, through direct interaction as well as social media, but the chatbots aren’t always learning good habits. One of the worst examples of this was Microsoft’s launch of its smart chatbot, ‘Tay.’

Designed to present as a white millennial teenage girl and to communicate on Twitter, GroupMe, and Kik, the chatbot quickly picked up humans’ worst habits and, within 24 hours of launch, started spewing misogynistic, racist, and genocidal messages at those ‘she’ came in contact with. Microsoft took the chatbot offline with apologies, explaining that Tay had ‘learned’ the phrases from humans on the internet.
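
Tay’s failure mode, absorbing and repeating whatever users feed it, can be sketched in a few lines, along with the kind of moderation step the episode showed was missing. This is a hypothetical simplification, not Microsoft’s code, and real content moderation is far more involved than a blocklist.

```python
# Toy sketch of a Tay-style "repeat-after-me" learner and the
# moderation layer it lacked. The blocklist approach is a deliberate
# oversimplification of real content-moderation pipelines.

learned_phrases: list[str] = []
BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens, not a real list

def learn(user_message: str) -> None:
    learned_phrases.append(user_message)   # Tay-style: absorb everything

def safe_learn(user_message: str) -> None:
    # The missing step: screen input before it can ever be echoed back
    if not any(bad in user_message.lower() for bad in BLOCKLIST):
        learned_phrases.append(user_message)

learn("hello there!")      # benign input is learned
learn("slur1 is great")    # so is coordinated abuse -- Tay's downfall
print(learned_phrases)
```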

Another chatbot, less inflammatory and more customer-service oriented, was Tommy Hilfiger’s TMY.GRL. Launched as an assistant bot to help promote the brand’s Gigi Hadid collection, TMY.GRL is yet another white representation, one that works in conjunction with the brand’s e-commerce site and shopping cart.

While most of the experience seemed fairly standard, many of the products were out of stock, and that is when TMY.GRL tried too hard to be sympathetic in her responses. The ‘sympathy’ wasn’t believable, and combined with the out-of-stock items and a cumbersome checkout, it made the entire experience a poor one. So far, chatbots don’t work quite as well for retail.

Chatbots That Got It Right

So, what elements are included in those chatbots that seem to have crossed the barriers and hit on the golden formula of getting it right?

A VentureBeat article narrows it down to these key ingredients:

  • Value-oriented concept (insight, usefulness, solving a unique problem)
  • Conversational UX (logic, content, overall experience)
  • Copywriting (personality, tone, manner)
  • Marketing (branding, promotion, discovery funnel)
  • Business model (monetization)
  • Results (number of users, value creation, engagement)

But I’d like to add something else: the need for a broad spectrum of writers and strategists from diverse backgrounds who can address cultural sensitivity in chatbots’ personalities, tones, and manners. Without this, a simple chatbot conversation can turn disastrous.

One chatbot at the top of the list, hitting all the marks of excellence, is ‘Yeshi.’ Designed to raise awareness of the Ethiopian water crisis, Yeshi represents a young Ethiopian girl who must walk two and a half hours a day to get clean water. This is storytelling at its best, integrating an emotional experience with media sharing and geolocation to raise funds. The brand message is clear, and the user connects with Yeshi on a personal level.

A majority of human-like chatbots are white, and there is a desperate need for more chatbots of color, such as Yeshi. Without diversity, brands may lose out on billions of dollars in buying power as people demand more personalization and real-life interaction with brands.

The systems being created are a “work in progress,” and they are teaching their developers about the many different ways to ‘be human.’ It’s estimated that over 35,000 chatbots were built around the globe in 2016, giving developers a chance not only to create, but to expand and embrace our diversity.

Being ‘human’ involves interacting with people who look nothing like us and having the language, or at least learning how, to embrace those differences. Is that too much to ask from a chatbot? Brands may not have a choice.



Maryann Reid is the digital managing editor of BlackEnterprise.com and the author of several books published by St. Martin’s Press. For more, please follow her @RealAlphanista.

I Took Toyota’s New Space Age Car for a Ride

A big highlight at the CES 2017 technology show in Las Vegas this year was Toyota’s futuristic Concept-i vehicle.

Truly something out of science fiction, this is less a vehicle and more a mobile robot friend. The car is powered by the latest in Toyota innovation, an artificial intelligence (AI) platform called “Yui.” You don’t just drive this car (or let it drive). You interact with it.

Yui is far more than software; it’s a personality. The AI acts as an interpreter between human and vehicle. With Yui, the Concept-i can drive itself, brake for you when you are driving manually, and even tell you how you were feeling during the drive.

On the Road With Yui

This is some seriously high-tech stuff. So, instead of just interviewing a bunch of engineers and marketing people, I road-tested the Concept-i with Yui as my co-pilot, all through a sophisticated virtual reality demonstration set up by Toyota’s engineers.

The experience started with me creating a profile for Yui to get to know me better. I entered my name and my hobbies in an app.

When I entered the Concept-i, Yui greeted me and then asked me where I wanted to go. By asking, I mean an actual voice emanating from the dashboard.

Yui offered trip suggestions based on the hobbies and activities I liked to do that I had entered into the app. The suggestions appeared in front of me, hovering mid-air in the dashboard area.
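
Toyota has not said how Yui maps a profile to suggestions, but the basic idea of scoring destinations against stated hobbies can be sketched simply. All names, tags, and data below are invented for illustration.

```python
# Hypothetical sketch of profile-based trip suggestions: score each
# destination by how many of the rider's hobbies its tags match.

profile = {"name": "Rider", "hobbies": {"hiking", "photography", "coffee"}}

destinations = {
    "Coastal Trailhead": {"hiking", "photography"},
    "Downtown Roastery": {"coffee"},
    "Outlet Mall": {"shopping"},
}

def suggest(hobbies: set[str], places: dict[str, set[str]]) -> list[str]:
    # Rank places by overlap with the rider's hobbies; drop zero matches
    scored = [(len(hobbies & tags), name) for name, tags in places.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(suggest(profile["hobbies"], destinations))
# ['Coastal Trailhead', 'Downtown Roastery']
```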

I selected my preferred destination simply by talking to Yui. No tapping, touching, or mouse click required. Via a map that appeared on the dashboard, Yui showed me the route we would take and how long it would take to reach the destination.

I began my journey in a virtual neighborhood, which was supposed to represent me taking the car from my home. As the trip began, Yui informed me that I had full manual control of the vehicle.

AI as Guardian

As Toyota representatives told me, the goal is not to relinquish control of driving totally to a machine. Rather, it’s to let humans drive manually when they want, say, on a nice day along a scenic coastal highway, and to hand off driving to the car and its software when, say, they are tired or drunk.

I drove the Concept-i through my idyllic little neighborhood. Whenever anything dangerous crossed my path, a human or another vehicle, Yui alerted me with a vocal warning and also went into “guardian” mode to handle braking if I was too slow to react.
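
That alert-then-intervene behavior can be expressed as a simple rule: warn the driver first, and brake only if a collision is imminent and the human has not reacted. The sketch below is illustrative only; the thresholds and names are my assumptions, not Toyota’s control logic.

```python
# Illustrative guardian-mode rule: warn the driver, then intervene only
# if time-to-collision drops below a safety threshold with no braking.
# Thresholds and field names are invented for illustration.

WARN_TTC = 3.0   # seconds remaining: issue a vocal warning
BRAKE_TTC = 1.5  # seconds remaining: system takes over braking

def guardian_action(time_to_collision: float, driver_braking: bool) -> str:
    if time_to_collision < BRAKE_TTC and not driver_braking:
        return "AUTO_BRAKE"    # driver too slow to react: car intervenes
    if time_to_collision < WARN_TTC:
        return "VOCAL_WARNING"
    return "MONITOR"

print(guardian_action(2.5, driver_braking=False))  # VOCAL_WARNING
print(guardian_action(1.0, driver_braking=False))  # AUTO_BRAKE
print(guardian_action(1.0, driver_braking=True))   # VOCAL_WARNING
```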

Once we reached the highway, Yui took over and started driving. Ambient music played, and Yui’s sensors homed in to gauge my level of relaxation. My driver’s seat automatically reclined, and a massage pad in the seat began kneading my lower back.

Suddenly, a biker appeared from a side trail in front of the car. Yui deftly avoided an accident, braking just enough to avoid the bicyclist but not enough to lurch me forward. Had I been driving in the actual scenario, the outcome would likely have been worse.

What a way to drive! After I reached my destination, Yui informed me of what my emotions were during the course of the trip. Unsurprisingly, I was relaxed and happy for most of the journey.

The Concept-i is a beautiful vehicle. Its winged doors open upward automatically instead of outward, almost like a DeLorean’s, though not at as steep an angle. Inside is a widescreen, 3-D, full-color display that lets you keep your head up, looking at the road instead of down at a screen.

Watch the video of my experience riding in Toyota’s Concept-i.

Tech Forecast: Social Artificial Intelligence Increases Entertainment Brands’ Profit Margins

Welcome to 2017!!! This year we will experience a lot of new things—and no, I am not talking about Donald Trump as president.

The possibilities on the horizon are:

  1. A potential Snap Inc. initial public offering (IPO)
  2. Taraji P. Henson winning an Oscar for Best Actress as Katherine G. Johnson, NASA mathematician, in Hidden Figures
  3. Apple iPhone 7s and iPhone 7s Plus—well, we know that is coming

Yet, one of the things I forecast for 2017 is #techies experiencing entertainment brands through predictive analysis: geolocation-based experiences that leverage artificial intelligence (AI) powered by social analytics data drawn from customer interactions.

“With demographic shifts moving culture and ‘cool’ in different directions, Social AI allows for curated and personalized experiences using historical and real-time data. This yields better experiences for customers, which leads to greater brand loyalty, customer retention, and willingness to pay a premium for this type of individualized experience.”


Bärí A. Williams, Esq., Head of Business Operations, North America at StubHub

Recently, I met with the new Head of Business Operations, North America at StubHub, Bärí A. Williams, Esq., and we discussed in depth how predictive analysis will drive a new level of intelligent, enriched customer experience, and how that experience can deliver a return on investment (ROI) worth the time and manpower it takes to develop.

Check out my one-on-one conversation below with Williams and stay tuned for all the exciting #techie marvels of 2017.

Nathaniel J. – Artificial intelligence is no longer just a buzzword in Silicon Valley; it is now a term used by traditional and startup organizations alike. Does this new predictive analysis for the entertainment industry fall within the “umbrella” of AI?

Bärí – No, this does not fall under the “umbrella” of AI. The term I have developed with my team at StubHub is Social AI.

Nathaniel J. – What is Social AI?

Bärí – Social AI is the ability to harness and use customer data and artificial intelligence (via social media integration and purchase history) to cultivate and offer better social experiences for an individual.

Nathaniel J. – But isn’t this similar to when I receive an offer in an email, text message, or notification via my mobile device? What is the significant difference in Social AI?

Bärí – Great question! Social AI uses customer/user generated content and data via social media site integration (what do you like on Twitter, IG, and Facebook), customer history (what have you paid for before), sifts through all of that information, and will introduce you to experiences, events, and associated interests to broaden horizons and create more meaningful experiences.
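
Williams doesn’t detail the algorithm, but blending social media likes with purchase history into a single ranking might look like the following minimal sketch. The weights, categories, and events are all invented for illustration and are not StubHub’s.

```python
# Hypothetical sketch of "Social AI" scoring: blend what a user likes
# on social media with what they've bought, then rank upcoming events.
# Weights, categories, and events are invented for illustration.

social_likes = {"jazz": 5, "food festivals": 3}   # from feed integration
purchase_history = {"jazz": 2, "theater": 1}      # from past tickets

events = [
    ("Jazz on the Lake", "jazz"),
    ("Taste of the City", "food festivals"),
    ("Hamlet in the Park", "theater"),
    ("Monster Truck Rally", "motorsports"),
]

def score(category: str) -> float:
    # Assume purchases signal intent more strongly than likes do
    return 1.0 * social_likes.get(category, 0) + 2.0 * purchase_history.get(category, 0)

for name, cat in sorted(events, key=lambda e: score(e[1]), reverse=True):
    print(f"{score(cat):4.1f}  {name}")
```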

Nathaniel J. – What are the possibilities of what Social AI can yield?

Bärí – The goal for Social AI is to yield a better experience for customers by exposing them to new bands, events, and encounters they wouldn’t have known about. This also makes a consumer’s desired experience easier to achieve, through ticketing, travel, and transportation, if necessary.

This Week in Tech Racism: Dec. 30, 2016

This week in tech racism: An actor’s Instagram photo sparks outrage and discussion about colorism in the African American community; ride-sharing services Uber and Lyft answer questions about discrimination concerns; how machines learn prejudice; plus more news from the cross-section of technology, science, and racism.

Click each title for a link to the full, sourced article.

Lance Gross Sparks Light Skin Supremacy Backlash

When actor Lance Gross posted a photo capturing the festive holiday spirit, he probably didn’t foresee the storm he was about to brew. Gross’s photo shows him with several other black men, all embracing light-skinned women. Off to the side of the couples is a darker-skinned woman, seemingly sad and alone, an outcast from the others. Social media erupted, accusing Gross of advancing colorism stereotypes.

Twitter Troll Milo Yiannopoulos Lands $250,000 Book Deal 

Breitbart news editor and professional social media troll Milo Yiannopoulos landed a quarter-million-dollar book deal with Simon & Schuster. The author, known for such thought-provoking, insightful posts as “Here Are My 2017 New Year’s Resolutions for MTV,” “Hollywood Leftists, Stop Being Racist and Move to Cuba,” and “Trannies Are Gay,” is perhaps best known for being banished from Twitter after his racist and unhinged harassment of actress Leslie Jones when she dared accept a role in the reboot of Ghostbusters.

Bristol Palin Blogs; Calls Stars Refusing to Perform at Trump Inauguration “Sissies” 

Why should Trump get the entire spotlight? It’s high time we heard from a Palin. Bristol, Sarah Palin’s daughter, took to her blog to sound off on A-list celebrities who refused to perform at the President-elect’s inauguration. “Isn’t it amazing how ‘not cool’ it is to be conservative in the public eye? Either Hollywood is that far off—or we have so many sissies we have in the spotlight too scared to stand for what they believe in!” Ms. Palin opined.

Uber and Lyft Address Discrimination Concerns, Explaining Their Practices: Here’s the Deal

Uber and Lyft recently addressed demands for a response to the discriminatory practices some of their drivers exhibit toward people of color. Both companies responded to suggestions that drivers not see users’ photos, to help prevent cherry-picking which fares to pick up. Uber confirmed that “before accepting a ride request, a driver only sees the customer’s star rating, current location, the type of service they want, and whether dynamic pricing applies.” Lyft stated that its drivers get the name of their riders, and riders the name of their driver, as part of a “digital trust profile” aiming to ensure that there is no confusion.
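
Uber’s description amounts to data minimization: the request a driver sees simply omits the fields that invite bias. Here is a hypothetical sketch of such a driver-facing view; the type and field names are invented for illustration, not Uber’s actual API.

```python
# Sketch of a data-minimized "driver view": the request a driver sees
# drops the rider's name and photo entirely. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class RiderAccount:          # full record held by the platform
    name: str
    photo_url: str
    star_rating: float
    location: tuple[float, float]

@dataclass
class DriverViewRequest:     # what the driver's app actually receives
    star_rating: float
    pickup_location: tuple[float, float]
    service_type: str
    dynamic_pricing: bool

def to_driver_view(rider: RiderAccount, service: str, surge: bool) -> DriverViewRequest:
    # Name and photo are deliberately excluded to reduce room for bias
    return DriverViewRequest(rider.star_rating, rider.location, service, surge)

rider = RiderAccount("A. Rider", "https://example.com/p.jpg", 4.9, (30.27, -97.74))
print(to_driver_view(rider, service="standard", surge=False))
```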

How a Machine Learns Prejudice 

A fascinating piece in Scientific American looks at how artificial intelligence-based computers learn prejudice. The conclusion? “Artificial intelligence picks up bias from human creators—not from hard, cold logic.” The article comes in the wake of concerns about search algorithms surfacing insulting and derogatory terms and images in connection with people of color, as well as other controversies surrounding AI and racism.
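
That mechanism is easy to demonstrate with a toy co-occurrence model: a system that merely counts which words appear together will reproduce whatever associations its training text contains. The “corpus” below is invented to make the effect obvious; nothing here is data from the study.

```python
# Toy demonstration that associations in training text become
# associations in the model. The corpus is invented so the effect is
# obvious; real bias enters the same way, just more subtly.
from collections import Counter

corpus = [
    "doctor he hospital", "doctor he surgery",
    "nurse she hospital", "nurse she clinic",
]

cooc: dict[str, Counter] = {}
for sentence in corpus:
    words = sentence.split()
    for w in words:
        cooc.setdefault(w, Counter()).update(x for x in words if x != w)

# The model was never told "doctors are male" -- it just counted text.
print(cooc["doctor"]["he"], cooc["doctor"]["she"])  # 2 0
print(cooc["nurse"]["she"], cooc["nurse"]["he"])    # 2 0
```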