Playing God: Why artificial intelligence is hopelessly biased - and always will be

Much has been said about the potential of artificial intelligence (AI) to transform many aspects of business and society for the better. In the opposite corner, science fiction has the doomsday narrative covered handily.

To ensure AI products function as their developers intend - and to avoid a HAL 9000 or Skynet-style scenario - the common narrative suggests that data used in the machine learning (ML) process must be carefully curated, to minimise the chances the product inherits harmful attributes.

According to Richard Tomsett, AI Researcher at IBM Research Europe, “our AI systems are only as good as the data we put into them. As AI becomes increasingly ubiquitous in all aspects of our lives, ensuring we’re developing and training these systems with data that is fair, interpretable and unbiased is critical.”

Left unchecked, the influence of undetected bias could also expand rapidly as appetite for AI products accelerates, especially if the means of auditing underlying data sets remain inconsistent and unregulated.

However, while the issues that could arise from biased AI decision making - such as prejudicial recruitment or unjust incarceration - are clear, the problem itself is far from black and white. 

Questions surrounding AI bias are impossible to disentangle from complex and wide-ranging issues such as the right to data privacy, gender and race politics, historical tradition and human nature - all of which must be unravelled and brought into consideration.

Meanwhile, questions over who is responsible for establishing the definition of bias and who is tasked with policing that standard (and then policing the police) serve to further muddy the waters.

The scale and complexity of the problem more than justifies doubts over the viability of the quest to cleanse AI of partiality, however noble it may be.

What is algorithmic bias?

Algorithmic bias can be described as any instance in which discriminatory decisions are reached by an AI model that aspires to impartiality. Its causes lie primarily in prejudices (however minor) found within the vast data sets used to train machine learning models, which act as the fuel for decision making.
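
To make the mechanism concrete, here is a minimal, synthetic sketch (an illustration of the general point, not an example from the article): a model trained on a data set sampled overwhelmingly from one group serves that group well, and another barely better than chance, even though both groups follow an equivalent underlying rule.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Both groups follow the same rule, expressed relative to the
    # group's own feature distribution.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

X_a, y_a = make_group(1000, shift=0.0)  # heavily sampled group
X_b, y_b = make_group(1000, shift=3.0)  # barely sampled group

# A badly sampled training set: roughly 99% group A, 1% group B.
X_train = np.vstack([X_a, X_b[:10]])
y_train = np.concatenate([y_a, y_b[:10]])

model = LogisticRegression().fit(X_train, y_train)
print("accuracy, group A:", accuracy_score(y_a, model.predict(X_a)))  # high
print("accuracy, group B:", accuracy_score(y_b, model.predict(X_b)))  # near chance
```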

Biases underpinning AI decision making could have real-life consequences for both businesses and individuals, ranging from the trivial to the hugely significant.

For example, a model responsible for predicting demand for a particular product, but fed data relating to only a single demographic, could plausibly generate decisions that lead to the loss of vast sums in potential revenue.

Equally, from a human perspective, a program tasked with assessing requests for parole or generating quotes for life insurance plans could cause significant damage if skewed by an inherited prejudice against a certain minority group.

According to Jack Vernon, Senior Research Analyst at IDC, the discovery of bias within an AI product can, in some circumstances, render it completely unfit for purpose.

“Issues arise when algorithms derive biases that are problematic or unintentional. There are two usual sources of unwanted biases: data and the algorithm itself,” he told TechRadar Pro via email.

“Data issues are self-explanatory enough, in that if features of a data set used to train an algorithm have problematic underlying trends, there's a strong chance the algorithm will pick up and reinforce these trends.”

“Algorithms can also develop their own unwanted biases by mistake...Famously, an algorithm for identifying polar bears and brown bears had to be discarded after it was discovered the algorithm based its classification on whether there was snow on the ground or not, and didn't focus on the bear's features at all.”

Vernon’s example illustrates the eccentric ways in which an algorithm can diverge from its intended purpose - and it’s this semi-autonomy that can pose a threat if a problem goes undiagnosed.

The greatest issue with algorithmic bias is its tendency to compound already entrenched disadvantages. In other words, bias in an AI product is unlikely to result in a white-collar banker having their credit card application rejected erroneously, but may play a role in a member of another demographic (which has historically had a greater proportion of applications rejected) suffering the same indignity.

The question of fair representation

The consensus among the experts we consulted for this piece is that, in order to create the least prejudiced AI possible, a team made up of the most diverse group of individuals should take part in its creation, using data from the deepest and most varied range of sources.

The technology sector, however, has a long-standing and well-documented issue with diversity where both gender and race are concerned.

In the UK, only 22% of directors at technology firms are women - a proportion that has remained practically unchanged for the last two decades. Meanwhile, only 19% of the overall technology workforce is female, far short of the 49% that women represent in the UK workforce as a whole.

Among big tech firms, meanwhile, the representation of minority groups has also seen little progress. Google and Microsoft are industry behemoths in the context of AI development, but the percentage of black and Latin American employees at both firms remains minuscule.

According to figures from 2019, only 3% of Google’s 100,000+ employees were Latin American and 2% were black - both up by just one percentage point on 2014. Microsoft’s record is only marginally better, with Latin Americans making up 5% of its workforce and black employees 3% in 2018.

The adoption of AI in enterprise, on the other hand, skyrocketed during a similar period, increasing by 270% between 2015 and 2019, according to analyst firm Gartner. The clamour for AI products, then, could be said to be far greater than the commitment to ensuring their quality.

Patrick Smith, CTO at data storage firm Pure Storage, believes businesses owe it not only to those who could be affected by bias, but also to themselves, to address the diversity issue.

“Organisations across the board are at risk of holding themselves back from innovation if they only recruit in their own image. Building a diversified recruitment strategy, and thus a diversified employee base, is essential for AI because it allows organisations to have a greater chance of identifying blind spots that you wouldn’t be able to see if you had a homogenous workforce,” he said.

“So diversity and the health of an organisation relates specifically to diversity within AI, as it allows them to address unconscious biases that otherwise could go unnoticed.”

Further, questions over precisely how diversity is measured add another layer of complexity. Should a diverse data set afford each race and gender equal representation, or should representation of minorities in a global data set reflect the proportions of each found in the world population?

In other words, should data sets feeding globally applicable models contain information relating to an equal number of Africans, Asians, Americans and Europeans, or should they represent greater numbers of Asians than any other group?

The same question can be raised with gender, given that around 105 boys are born worldwide for every 100 girls.
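
Neither answer is obviously correct, but the gap between the two standards is easy to quantify. The sketch below audits a hypothetical data set against both baselines; the group counts and world-population shares are illustrative placeholders, not census figures.

```python
# Hypothetical audit: compare a data set's composition against two candidate
# targets - equal representation per group, and representation proportional
# to (rough, illustrative) world population shares.
dataset_counts = {"Africa": 120, "Asia": 300, "Americas": 400, "Europe": 180}
total = sum(dataset_counts.values())

equal_target = 1 / len(dataset_counts)
world_share = {"Africa": 0.17, "Asia": 0.60, "Americas": 0.13, "Europe": 0.10}

for group, count in dataset_counts.items():
    observed = count / total
    print(f"{group:9s} observed={observed:.2f} "
          f"equal={equal_target:.2f} proportional={world_share[group]:.2f}")
```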

The challenge facing those who aim to develop sufficiently impartial (or perhaps proportionally impartial) AI is the challenge facing societies across the globe: how can we ensure all parties are not only represented but heard, when historical precedent works all the while to undermine the endeavour?

Is data inherently prejudiced?

The importance of feeding the right data into ML systems is clear, correlating directly with AI’s ability to generate useful insights. But identifying the right versus wrong data (or good versus bad) is far from simple.

As Tomsett explains, “data can be biased in a variety of ways: the data collection process could result in badly sampled, unrepresentative data; labels applied to the data through past decisions or human labellers may be biased; or inherent structural biases that we do not want to propagate may be present in the data.”

“Many AI systems will continue to be trained using bad data, making this an ongoing problem that can result in groups being put at a systemic disadvantage,” he added.

It would be logical to assume that removing data types that could possibly inform prejudices - such as age, ethnicity or sexual orientation - might go some way to solving the problem. However, auxiliary or adjacent information held within a data set can also serve to skew output.

An individual’s postcode, for example, might reveal much about their characteristics or identity. This auxiliary data could be used by the AI product as a proxy for the primary data, resulting in the same level of discrimination.
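
One way to audit for this kind of leakage - a common technique in fairness research, offered here as an assumption rather than anything the article prescribes - is to test whether the remaining features can predict the attribute that was removed. If a classifier recovers it well above chance, proxies are present:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5000

# The protected attribute is dropped from the training data...
protected = rng.integers(0, 2, size=n)
# ...but postcode (coded numerically here for simplicity) correlates
# with it, while income does not.
postcode = 2 * protected + rng.normal(size=n)
income = rng.normal(size=n)

X = np.column_stack([postcode, income])
leakage = cross_val_score(LogisticRegression(), X, protected, cv=5).mean()
# Accuracy well above 0.5 means the 'removed' attribute is still recoverable.
print(f"protected attribute recoverable with accuracy {leakage:.2f}")
```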

Further complicating matters, there are instances in which bias in an AI product is actively desirable. For example, if using AI to recruit for a role that demands a certain level of physical strength - such as firefighter - it is sensible to discriminate in favour of male applicants, because biology dictates that the average male is physically stronger than the average female. In this instance, the data set feeding the AI product is indisputably biased, but appropriately so.

This level of depth and complexity makes auditing for bias, identifying its source and grading data sets a monumentally challenging task.

To tackle the issue of bad data, researchers have toyed with the idea of bias bounties, similar in style to the bug bounties used by cybersecurity vendors to weed out imperfections in their services. However, this model operates on the assumption that an individual is equipped to recognise bias against demographics other than their own - a question worthy of a whole separate debate.

Another compromise could be found in the notion of Explainable AI (XAI), which dictates that developers of AI algorithms must be able to explain in granular detail the process that leads to any given decision generated by their AI model.

“Explainable AI is fast becoming one of the most important topics in the AI space, and part of its focus is on auditing data before it’s used to train models,” explained Vernon.

“The capability of AI explainability tools can help us understand how algorithms have come to a particular decision, which should give us an indication of whether biases the algorithm is following are problematic or not.”
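
As a hedged illustration of the kind of audit Vernon describes - using scikit-learn’s permutation importance rather than any particular vendor’s tooling - the bear problem can be recreated in miniature, catching a model that leans on the background instead of the subject:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 2000

snow = rng.integers(0, 2, size=n)                # background: snow present?
fur_tone = snow + rng.normal(scale=2.0, size=n)  # weak, noisy animal feature
label = snow                                     # polar bear photos all have snow

X = np.column_stack([snow, fur_tone])
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, label)

result = permutation_importance(model, X, label, n_repeats=10, random_state=0)
for name, imp in zip(["snow", "fur_tone"], result.importances_mean):
    print(f"{name:9s} importance = {imp:.3f}")
# 'snow' dominating the importances is the red flag: the classifier has
# learned the background, not the bear.
```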

Transparency, it seems, could be the first step on the road to addressing the issue of unwanted bias. If we’re unable to prevent AI from discriminating, the hope is we can at least recognise discrimination has taken place.

Are we too late?

The perpetuation of existing algorithmic bias is another problem that bears thinking about. How many tools currently in circulation are fueled by significant but undetected bias? And how many of these programs might be used as the foundation for future projects?

When developing a piece of software, it’s common practice for developers to draw from a library of existing code, which saves time and allows them to embed pre-prepared functionalities into their applications.

The problem, in the context of AI bias, is that the practice could extend the influence of bias that hides away in the nooks and crannies of vast code libraries and data sets.

Hypothetically, if a particularly popular piece of open source code were to exhibit bias against a particular demographic, it’s possible the same discriminatory inclination could embed itself at the heart of many other products, unbeknownst to their developers.

According to Kacper Bazyliński, AI Team Leader at software development firm Neoteric, it is relatively common for code to be reused across multiple development projects, depending on their nature and scope.

“If two AI projects are similar, they often share some common steps, at least in data pre- and post-processing. Then it’s pretty common to transplant code from one project to another to speed up the development process,” he said.

“Sharing highly biased open source data sets for ML training makes it possible that the bias finds its way into future products. It’s a task for AI development teams to prevent this from happening.”

Further, Bazyliński notes that it’s not uncommon for developers to have limited visibility into the kinds of data going into their products.

“In some projects, developers have full visibility over the data set, but it’s quite often that some data has to be anonymized or some features stored in data are not described because of confidentiality,” he noted.

This isn’t to say code libraries are inherently bad - they are no doubt a boon for the world’s developers - but their potential to contribute to the perpetuation of bias is clear.

“Against this backdrop, it would be a serious mistake to...conclude that technology itself is neutral,” reads a blog post from Google-owned AI firm DeepMind.

“Even when bias does not originate with software developers, it is still repackaged and amplified by the creation of new products, leading to new opportunities for harm.”

Bias might be here to stay

‘Bias’ is an inherently loaded term, carrying with it a host of negative baggage. But it is possible bias is more fundamental to the way we operate than we might like to think - inextricable from the human character and therefore anything we produce.

According to Alexander Linder, VP Analyst at Gartner, the pursuit of impartial AI is misguided and impractical, by virtue of this very human paradox.

“Bias cannot ever be totally removed. Even the attempt to remove bias creates bias of its own - it’s a myth to even try to achieve a bias-free world,” he told TechRadar Pro.

Tomsett, meanwhile, strikes a slightly more optimistic note, but also gestures towards the futility of an aspiration to total impartiality.

“Because there are different kinds of bias and it is impossible to minimize all kinds simultaneously, this will always be a trade-off. The best approach will have to be decided on a case by case basis, by carefully considering the potential harms from using the algorithm to make decisions,” he explained.

“Machine learning, by nature, is a form of statistical discrimination: we train machine learning models to make decisions (to discriminate between options) based on past data.”
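
Tomsett’s trade-off can be made concrete with a worked example (the numbers below are synthetic). When two groups have different base rates, a classifier that treats qualified individuals identically across groups - satisfying equal opportunity - will still select from the groups at different rates, violating demographic parity:

```python
def rates(tp, fp, fn, tn):
    selection_rate = (tp + fp) / (tp + fp + fn + tn)  # share given a positive decision
    tpr = tp / (tp + fn)                              # true positive rate among the qualified
    return selection_rate, tpr

# Group A: 50% qualified. Group B: 20% qualified.
# The classifier behaves identically within each group (TPR 0.8, FPR 0.1).
sel_a, tpr_a = rates(tp=40, fp=5, fn=10, tn=45)
sel_b, tpr_b = rates(tp=16, fp=8, fn=4, tn=72)

print(f"Group A: selection rate {sel_a:.2f}, TPR {tpr_a:.2f}")
print(f"Group B: selection rate {sel_b:.2f}, TPR {tpr_b:.2f}")
# Equal opportunity holds (TPR 0.80 for both), yet demographic parity fails
# (selection rates 0.45 vs 0.24). Equalising the selection rates would, in
# turn, force the TPRs apart - one bias is traded for another.
```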

The attempt to rid decision making of bias, then, runs at odds with the very mechanism humans use to make decisions in the first place. Without a measure of bias, AI cannot be mobilised to work for us.

It would be patently absurd to suggest AI bias is not a problem worth paying attention to, given the obvious ramifications. But, on the other hand, the notion of a perfectly balanced data set, capable of rinsing all discrimination from algorithmic decision-making, seems little more than an abstract ideal.

Life, ultimately, is too messy. Perfectly egalitarian AI is unachievable, not because it’s a problem that requires too much effort to solve, but because the very definition of the problem is in constant flux.

The conception of bias varies in line with changes to societal, individual and cultural preference - and it is impossible to develop AI systems within a vacuum, at a remove from these complexities.

To be able to recognise biased decision making and mitigate its damaging effects is critical, but to eliminate bias is unnatural - and impossible.



