Edge interview with Kai-Fu Lee

The Edge interview with Kai-Fu Lee is very good. He is one of the original A.I. researchers and has worked at most of the big-name technology companies.

He discusses the history of A.I., the current situation involving Deep Learning, and goes on to talk about the future: “We’re all going to face a very challenging next fifteen or twenty years, when half of the jobs are going to be replaced by machines. Humans have never seen this scale of massive job decimation.”

He talks about areas that will see a lot of growth in the immediate future: micro-payments, the Internet of Things, and social networks delivering profiles that trade privacy for convenience.

He talks about the Haves and the Have Nots: “The people who are inventing these AI algorithms, building AI companies, they will become the haves. The people whose jobs are replaced will be the have nots. And the gap between them, whether it’s in wealth or power, will be dramatic, and will be perhaps the largest that mankind has ever experienced.”

He also talks about a growing inequality among countries: “Lastly, and perhaps most difficult to solve, is the gap between countries. The countries that have AI technology will be much better off. They’ll be creating and extracting value. The countries that have large populations of users whose data is gathered and iterated through the AI algorithm, they’ll be in good shape.”

Deep Learning

The more I use Deep Learning, the more I am amazed by it. Some things that would be hard to do programmatically are easy with the right Neural Network. It feels like we are just starting to scratch the surface of what is possible.

Computation meets Data Science Conference

Today I was at a Computation meets Data Science Conference, organised by Wolfram Research and the CQF. There were some interesting talks; the ones I enjoyed most used Mathematica to analyse data in real time in interesting ways. Mathematica now looks to have good support for building neural networks, and I was impressed at how quickly Jon McLoone from Wolfram was able to get some quite useful neural network models up and running. Jon made the point that for some problems you can get good results really quickly with neural nets, while for others it’s really hard, and it’s not obvious in advance which problems are which.
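The demos at the talk were in Mathematica, but the same “up and running in a few lines” feeling holds elsewhere. As a rough illustration (a hypothetical NumPy sketch, not code from the talk), here is a tiny one-hidden-layer network trained by hand on XOR, a classic problem that is awkward to solve with a linear model but trivial for a small net:

```python
import numpy as np

# Hypothetical sketch: a one-hidden-layer network fit to XOR with
# hand-written backpropagation, just to show how little code a
# basic neural net needs.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return h, out

losses = []
lr = 0.5
for step in range(2000):
    h, out = forward(X)
    losses.append(np.mean((out - y) ** 2))    # mean squared error
    # Backpropagation, written out explicitly
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)       # tanh derivative
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

XOR happens to be one of the “easy” problems Jon alluded to; the frustrating part is that for harder problems the same few lines of setup give you no such guarantee.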

More Experiments with Neural Style Transfer

I’ve been playing more with Neural Style Transfer. It’s fun to play with, but with the code I’m using I’m struggling to get results I can actually use. I tried to merge the style of a Platon photo with a photo of me (taken while I was growing a beard). It was strange what information the neural net decided to take from the style photo.
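Part of what makes the results unpredictable is how “style” is defined. Most style-transfer code descends from the Gatys et al. approach, where style is captured by Gram matrices of convolutional feature maps: correlations between channels, with all spatial information thrown away. A minimal NumPy sketch of that style loss (illustrative only, with random arrays standing in for real VGG features):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map:
    inner products between channel activations, averaged over
    spatial positions. This discards *where* things are and keeps
    only *which textures co-occur*."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(style_feats, generated_feats):
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(style_feats) - gram_matrix(generated_feats)
    return np.mean(diff ** 2)

rng = np.random.default_rng(0)
style = rng.normal(size=(16, 8, 8))      # stand-in for VGG features
same = style.copy()
different = rng.normal(size=(16, 8, 8))

print(style_loss(style, same))       # identical features match exactly
print(style_loss(style, different))  # mismatched textures are penalised
```

Because the loss only sees these channel correlations, the optimiser is free to pull any texture statistics it likes out of the style image, which goes some way to explaining the weird choices the network made with my beard photo.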