The Edge interview with Kai-Fu Lee is very good. He is one of the original A.I. researchers and has worked in the industry for most of the big-name technology companies.
He discusses the history of A.I., the current situation involving Deep Learning, and goes on to talk about the future. "We're all going to face a very challenging next fifteen or twenty years, when half of the jobs are going to be replaced by machines. Humans have never seen this scale of massive job decimation."
He talks about areas that will see a lot of growth in the immediate future: micro-payments, the Internet of Things, and social networks delivering profiles that trade privacy for convenience.
He talks about the Haves and the Have Nots: “The people who are inventing these AI algorithms, building AI companies, they will become the haves. The people whose jobs are replaced will be the have nots. And the gap between them, whether it’s in wealth or power, will be dramatic, and will be perhaps the largest that mankind has ever experienced.”
He also talks about a growing inequality among countries: "Lastly, and perhaps most difficult to solve, is the gap between countries. The countries that have AI technology will be much better off. They'll be creating and extracting value. The countries that have large populations of users whose data is gathered and iterated through the AI algorithm, they'll be in good shape."
We need to deal with so much information from all sorts of media these days, that reputation is becoming a larger and larger factor in our society.
In the past the path to publication was much harder, so something that was published acquired a reputation simply by virtue of the publication process. These days, however, it's cheap and easy to publish. Fake news sites have the trappings of a real news site, but attract people by appealing to their biases. They trade on the reputation gained simply by looking like a reputable site and having a plausible domain name.
We often rely on "reputation chains" to validate information. We believe a study because scientists have reviewed it as part of the peer review process. The study gains from the reputation of the journal, and – to the more knowledgeable – from the reputation of the reviewing scientists. Unfortunately, people with good reputations can sometimes spread misinformation, so we still need to be critical about the veracity of the information we receive. Our cognitive biases can also cause us to reject true information, so we need to be cautious when rejecting information from a reputable source.
We also have more reputation transmission mechanisms these days. We have accreditations, charter groups, and social networking sites for signalling reputation. We have awards and prizes for boosting reputation. Reputation is an increasingly bankable attribute these days.
I just tried the Deep Music skill on Alexa. It generates AI music – which sounds pretty much as you’d expect. It’s a bit repetitive, but not too bad. This is an area of Machine Learning that will get a lot better in the near future. So, my AI voice assistant can now play AI generated music at me!
There is a great graph-filled post over at Slate Star Codex called Technological Unemployment – much more than you wanted to know. After analysing a lot of data from the US economy, the author arrives at some tentative conclusions. The main point seems to be that the evidence for large-scale technological unemployment is mixed. There is, however, evidence of technological underemployment, and signs that people are now struggling to adjust. The final paragraph is:
“This is a very depressing conclusion. If technology didn’t cause problems, that would be great. If technology made lots of people unemployed, that would be hard to miss, and the government might eventually be willing to subsidize something like a universal basic income. But we won’t get that. We’ll just get people being pushed into worse and worse jobs, in a way that does not inspire widespread sympathy or collective action. The prospect of educational, social, or political intervention remains murky.”
In Charlie Stross’ Keynote at the CCC, he talks about corporations being like Slow A.I. organisms. They pursue their instrumental goals, they exist apart from people yet are legal entities in their own right, and they can have lifespans way longer than human lifespans.
There is a great post over at Charlie Stross' Blog that gives the text of his keynote at the 34th Chaos Communication Congress in Leipzig, December 2017. He makes some interesting points about old, slow AI – i.e. corporations – and compares them to cannibalistic organisms that shed people like cells. He talks about the ways the standard limiters of regulation are failing (regulatory capture and regulatory lag). He ends with a fairly negative assessment of where we are heading. It's a thought-provoking talk, and well worth reading / watching.
As Helen and I get older, I think that the way we work will have to change. At some point we will probably find it difficult to get contracts because of ageism, and we will also be too expensive in comparison with graduates with a few years' experience. We will be forced to work entirely on our own projects. This is going to mean a few changes to the way we think. Both Helen and I have an ingrained, strong work ethic, which struggles when we work on our own more nebulous projects. We both find it hard to stick with projects that don't have a definite income stream. In the future we will need to change both the way we work and the way we think about our work.