The Edge interview with Kai-Fu Lee is very good. He is one of the original A.I. researchers and has worked at most of the big-name technology companies in the industry.
He discusses the history of A.I. and the current situation involving Deep Learning, and goes on to talk about the future: “We’re all going to face a very challenging next fifteen or twenty years, when half of the jobs are going to be replaced by machines. Humans have never seen this scale of massive job decimation.”
He talks about areas that will see a lot of growth in the immediate future: micro-payments, the Internet of Things, and social networks delivering profiles that trade privacy for convenience.
He talks about the Haves and the Have Nots: “The people who are inventing these AI algorithms, building AI companies, they will become the haves. The people whose jobs are replaced will be the have nots. And the gap between them, whether it’s in wealth or power, will be dramatic, and will be perhaps the largest that mankind has ever experienced.”
He also talks about a growing inequality among countries: “Lastly, and perhaps most difficult to solve, is the gap between countries. The countries that have AI technology will be much better off. They’ll be creating and extracting value. The countries that have large populations of users whose data is gathered and iterated through the AI algorithm, they’ll be in good shape.”
Last night we went to see the movie “Isle of Dogs” at the Duke of York’s PictureHouse cinema in Brighton. Apparently the Duke of York is one of the oldest continuously run movie theatres in the world! The movie was fantastic! Both Helen and I thoroughly enjoyed it. Definitely a must see.
We need to deal with so much information from all sorts of media these days, that reputation is becoming a larger and larger factor in our society.
In the past the path to publication was much harder, so something that was published acquired a reputation simply by virtue of the publication process. These days, however, it’s cheap and easy to publish. Fake news sites are ones that have the trappings of a real news site, but initially attract people by appealing to their biases. They trade on the reputation gained simply by looking like a reputable site and having a plausible domain name.
We often rely on “reputation chains” to validate information. We believe a study because scientists have reviewed the study as part of the peer review process. The study has gained from the reputation of the journal, and – to the more knowledgeable – from the reputation of the reviewing scientists. Unfortunately, sometimes people with good reputations can spread misinformation, so we still need to be critical as to the veracity of the information we receive. Our cognitive biases can cause us to reject true information, so we need to be cautious when rejecting information from a reputable source.
We also have more reputation transmission mechanisms these days. We have accreditations, charter groups, and social networking sites for signalling reputation. We have awards and prizes for boosting reputation. Reputation is an increasingly bankable attribute these days.
There is a blog post here about How to Fall Asleep Fast. The technique comes from the Second World War, where it was used to help pilots get enough sleep. A man named Bud Winter was tasked with training pilots to fall asleep quickly. The trick is to physically relax and then mentally relax. If you can keep your mind clear of thoughts for 10 seconds, apparently you will be asleep. 96% of pilots trained on this technique by Winter were able to fall asleep within two minutes or less.
Recently I’ve started studying maths using a Jupyter notebook. I review my coursework, and as I go through each theorem/proof, I code it up in the IPython notebook and experiment with the maths. I’ve found it a good way to come to grips with complex ideas. I’m currently studying Dirichlet Characters, and am finding the technique invaluable.
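As an example of the kind of experiment I mean (this is an illustrative sketch, not code from my coursework), a Dirichlet character mod q can be built concretely by picking a generator of (Z/qZ)* and sending it to a root of unity. The snippet below constructs the character mod 5 with χ(2) = i and checks two defining properties:

```python
def make_character(q, g, zeta):
    """Dirichlet character mod q, defined by sending the
    generator g of (Z/qZ)* to the root of unity zeta."""
    values = {}
    x, k = 1, 0
    while True:
        values[x] = zeta ** k  # chi(g^k) = zeta^k
        x, k = (x * g) % q, k + 1
        if x == 1:
            break

    def chi(n):
        # chi(n) = 0 whenever gcd(n, q) > 1
        return values.get(n % q, 0)

    return chi

# Character mod 5: (Z/5Z)* is generated by 2; send 2 -> i
chi = make_character(5, 2, 1j)

# Completely multiplicative: chi(ab) = chi(a) * chi(b)
assert abs(chi(2 * 3) - chi(2) * chi(3)) < 1e-12
# Orthogonality: a non-trivial character sums to 0 over a full period
assert abs(sum(chi(n) for n in range(5))) < 1e-12
```

Playing with small checks like these in a notebook cell is exactly how abstract definitions start to feel concrete.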
The key to fast improvement is to iterate as quickly as possible, with good feedback loops from quality sources built in. This is why sketches, outlines, designs, and rough plans are so important. The goal is to gather as much feedback as possible, and then to iterate quickly. When enough information about the optimal solution has been gathered at a particular level of granularity, drill down into greater detail. It’s essentially the gradient descent algorithm applied to life.
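To make the analogy concrete, here is a minimal sketch of gradient descent itself (illustrative code, using a toy one-dimensional function): each cheap iteration takes feedback, the gradient at the current point, and uses it to make a small correction.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient: many cheap
    iterations, each corrected by feedback from the slope."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2(x - 3)
best = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
# best converges towards the minimiser x = 3
```

The rough plan is the starting point x0, the feedback sources supply the gradient, and the step size is how boldly you act on each round of feedback.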
“My attention span is so poor today that my train of thought is now a replacement bus service.”
From @NickMotown on Twitter
Last night we went to the British Library to see an event called “Sounds of the Sky”. It was fun being in the British Library and seeing some of the rooms, but the event wasn’t that great. The first part involved lying on our backs in a big room while various audio recordings played through some speakers overhead. The audio tracks – mainly interviews – were good, but the sound itself was a bit muddy, and neither Helen nor I could make out some of the dialogue.
I just tried the Deep Music skill on Alexa. It generates AI music – which sounds pretty much as you’d expect. It’s a bit repetitive, but not too bad. This is an area of Machine Learning that will get a lot better in the near future. So, my AI voice assistant can now play AI generated music at me!
Lately I’ve been doing a lot of work on automating my life. It’s been a lot of fun! I’ve been using Python and Jupyter a lot to create scripts to make myself more productive. I have been customising my notebook to create an optimal work environment. I’ve been setting up my Emacs environment to make it more effective. It’s been nice having the space to do this!