The Use and Abuse of
A Virtual Intelligence Session
Thursday, March 25, 2020
2:00-4:00pm PT • 5:00-7:00pm ET
Machine learning systems are celebrated for their superhuman ability to play games like chess and Go, and for their capacity to recognize patterns – such as faces – in data. They can also generate what appears to be human-style text.
In fact, these systems are correlation engines, identifying statistical relationships in datasets and predicting outcomes based on those observed correlations, no more and no less. Even the sentences they produce are merely the product of consuming massive quantities of text and noting which words tend to appear near one another.
As datasets grow larger, the potential to identify “false” correlations – with no causal significance – grows exponentially.
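This effect is easy to demonstrate with a toy simulation (a hypothetical sketch for illustration only, not material from the session): when many pure-noise features are tested against a random target with only a few samples, some of them will correlate "strongly" by chance alone.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    n = len(x)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n * sx * sy)

random.seed(0)
n_samples, n_features = 30, 2000  # few observations, many candidate variables

# A target and features that are all independent Gaussian noise:
# any correlation found here is "false" by construction.
target = [random.gauss(0, 1) for _ in range(n_samples)]
features = [[random.gauss(0, 1) for _ in range(n_samples)]
            for _ in range(n_features)]

# Count noise features that clear a "strong correlation" bar anyway.
spurious = sum(1 for f in features if abs(pearson(f, target)) > 0.4)
print(f"{spurious} of {n_features} noise features look correlated")
```

With thresholds and sizes like these, a noticeable handful of purely random features pass the bar, despite there being no causal relationship anywhere in the data.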
The potential for abuse is all the greater when the data to be analyzed has been generated from systemically biased behavior: then ML becomes “money laundering for bias,” as in systems used by parole boards or in sentencing decisions.
And as powerful as these systems are at playing games whose rules are fixed in advance, they fail utterly at the most important games we play: the ones in which we try to understand what others mean by the words they use, as we negotiate the rules of our engagement.
Extracting information from data is useful.
Extracting meaning from information is so much harder; it is why we still need humans in the loop.
Register for FiReSide