The Edge interview with Kai-Fu Lee is very good. He is one of the original A.I. researchers and has worked at many of the big-name technology companies.
He discusses the history of A.I., the current situation involving Deep Learning, and goes on to talk about the future: “We’re all going to face a very challenging next fifteen or twenty years, when half of the jobs are going to be replaced by machines. Humans have never seen this scale of massive job decimation.”
He talks about areas that will see a lot of growth in the immediate future: micro-payments, the Internet of Things, and social networks delivering profiles that trade privacy for convenience.
He talks about the Haves and the Have Nots: “The people who are inventing these AI algorithms, building AI companies, they will become the haves. The people whose jobs are replaced will be the have nots. And the gap between them, whether it’s in wealth or power, will be dramatic, and will be perhaps the largest that mankind has ever experienced.”
He also talks about a growing inequality among countries: “Lastly, and perhaps most difficult to solve, is the gap between countries. The countries that have AI technology will be much better off. They’ll be creating and extracting value. The countries that have large populations of users whose data is gathered and iterated through the AI algorithm, they’ll be in good shape.”
We need to deal with so much information from all sorts of media these days that reputation is becoming a larger and larger factor in our society.
In the past the path to publication was much harder, so something that was published acquired a reputation simply by virtue of the publication process. These days, however, it’s cheap and easy to publish. Fake news sites are ones that have the trappings of a real news site but initially attract people by appealing to their biases. They trade on the reputation gained simply by looking like a reputable site and having a plausible domain name.
We often rely on “reputation chains” to validate information. We believe a study because scientists have reviewed it as part of the peer review process. The study gains from the reputation of the journal and, for the more knowledgeable, from the reputation of the reviewing scientists. Unfortunately, people with good reputations can sometimes spread misinformation, so we still need to be critical as to the veracity of the information we receive. Our cognitive biases can also cause us to reject true information, so we need to be cautious when rejecting information from a reputable source.
We also have more reputation transmission mechanisms these days. We have accreditations, charter groups, and social networking sites for signalling reputation. We have awards and prizes for boosting reputation. Reputation is an increasingly bankable attribute these days.
I just tried the Deep Music skill on Alexa. It generates AI music – which sounds pretty much as you’d expect. It’s a bit repetitive, but not too bad. This is an area of Machine Learning that will get a lot better in the near future. So, my AI voice assistant can now play AI-generated music at me!
I just started experimenting with Image-Style Transfer. I’ve been excited about it for a long time, but reading the code accompanying Nvidia’s latest paper prompted me to start playing with it in earnest. We studied this in the Coursera Deep Learning courses as well. As I don’t have an Nvidia card installed on my notebook, I started off with this Torch Implementation.
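The core idea, as covered in the Coursera course material, is that the “style” of an image can be captured by the Gram matrix of a convolutional layer’s feature maps, i.e. the correlations between channels. As a minimal sketch of just that piece (the function names, shapes, and random “features” below are my own illustration, not code from the Nvidia paper or the Torch implementation):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a conv-layer feature map.

    features: array of shape (channels, height, width).
    Returns a (channels, channels) matrix of channel correlations,
    normalised by the number of spatial positions.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # flatten the spatial dimensions
    return flat @ flat.T / (h * w)

def style_loss(gram_style, gram_generated):
    """Mean squared difference between two Gram matrices."""
    return np.mean((gram_style - gram_generated) ** 2)

# Toy example: random arrays standing in for real conv activations.
rng = np.random.default_rng(0)
style_feats = rng.standard_normal((8, 16, 16))
gen_feats = rng.standard_normal((8, 16, 16))

g_style = gram_matrix(style_feats)
g_gen = gram_matrix(gen_feats)
print(style_loss(g_style, g_style))   # identical features give zero loss
print(style_loss(g_style, g_gen))     # mismatched styles give a positive loss
```

In a full style-transfer setup this loss is computed at several layers of a pretrained network and minimised, together with a content loss, by gradient descent on the generated image’s pixels.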
There is a great graph-filled post over at Slate Star Codex called Technological Unemployment – much more than you wanted to know. After analysing a lot of data from the US economy, the author arrives at some tentative conclusions. The main point is that the evidence for large-scale technological unemployment is mixed; there is, however, evidence of technological underemployment, and signs that people are struggling to adjust. The final paragraph is:
“This is a very depressing conclusion. If technology didn’t cause problems, that would be great. If technology made lots of people unemployed, that would be hard to miss, and the government might eventually be willing to subsidize something like a universal basic income. But we won’t get that. We’ll just get people being pushed into worse and worse jobs, in a way that does not inspire widespread sympathy or collective action. The prospect of educational, social, or political intervention remains murky.”
The world chess champion Garry Kasparov concluded that “Weak human + machine + better process was superior to a strong computer alone and, more remarkable, superior to a strong human + machine + inferior process.” He created “Freestyle Chess”, pairing a computer and a human to make a stronger chess player. This hybrid combination is sometimes called a “Centaur”.
In Charlie Stross’ Keynote at the CCC, he talks about corporations being like Slow A.I. organisms. They pursue their instrumental goals, they exist apart from people yet are legal entities in their own right, and they can have lifespans way longer than human lifespans.
There is a great post over at Charlie Stross’ Blog that gives the text of his keynote at the 34th Chaos Communication Congress in Leipzig, December 2017. He makes some interesting points about old, slow AI – i.e. corporations – comparing them to cannibalistic organisms that shed people like cells. He talks about the ways that regulation, the standard limiter, is failing (regulatory capture and regulatory lag). He ends with a fairly negative assessment of where we are heading. It’s a thought-provoking talk, and well worth reading / watching.
Facebook has recently got into trouble over an accusation that it is suppressing conservative news stories in the trending news categories. Facebook has an algorithmic system that promotes trending topics to a human curation team, who make the final decision about what gets promoted. Obviously human beings have biases. One of the interesting things that has happened in finance is that banks are using algorithms more and more to ensure that humans aren’t involved in situations where there can be a conflict of interest. One example is the 4pm FX fix, which is now required to be handled algorithmically. There’s a trend here – algorithms are being used to ensure fairness. Will media companies be forced to have algorithmic editors to remove bias from reporting?