I’m really not sure what to think about the SI. They seem to exist in some no-man’s-land between kookiness and practical planning for the future. Dr Novella of the SGU tried to get Vassar to explain how we can allow AI to recursively self-improve without constraint and at the same time be reasonably sure it won’t turn on us and use humans for batteries. I’m not sure Vassar explained that very well. He seemed to be jumping back and forth between semantic games: that designing an AI so it can’t follow a certain path once it begins to improve itself isn’t really constraining it. But then again, it’s probably more likely that I’m just too dumb to get it.
The interview portion of the show linked to above starts at about 26 minutes in.