Ron Brachman: AI is what you make of it
We met Prof. Ronald J. Brachman at the FBK headquarters during the conference "The Pleasure of Research in AI". Here is his perspective on AI as a long-time researcher.
Let’s start from the beginning: a definition of AI
There are many, many definitions of artificial intelligence, and to my mind perhaps the most important thing about intelligence is that it's not a single, simple thing: it's multi-faceted, and at the moment I think it's beyond any easy definition.
There are people who focus on specific behaviors and skills that look like they can only be done by people or intelligent animals, but those are very narrow and specific. Then there are more general capabilities; sometimes people call this artificial general intelligence.
That is where you can take an AI system, put it down in almost any setting it has never seen before, and have it do things that are reasonable, that exhibit common sense, that show a certain ability to reason and to predict the future, and that are somewhat rational. As for me, I like to think of AI as focused more on the latter, on the general notions of intelligence, where you're not trying to focus on a very narrow domain or a very specific set of goals, but not everybody has the same view.
I like to think of intelligence as more general: the ability to support an intelligent entity in a new situation, to always do something reasonable and robust, and to react appropriately when it sees and hears new things. So people sometimes wonder what an agent needs to know in order to behave intelligently. I think it's a combination of things. There are certain built-in capabilities, such as the ability to envision the future in its head. If it's going to interact with human beings, it also needs basic commonsensical ideas about the fact that humans have minds, intentions, and beliefs, and that they interact in language. So there's a baseline of things that all humans learn by the time they're toddlers that I think is very important for a machine to know in order for it to behave intelligently. Then, besides the things that it knows, it will need the ability to learn, the ability to reason, the ability to project the future, and the ability to match up what it perceives through whatever perceptual mechanisms it has with its knowledge base. Those are very important. So it's not simply a matter of knowing how things are; it's also a combination of what's known, how to learn, and the skills you use for dealing with that knowledge.
What’s the difference between doing research within academia and within companies?
I’ve had the good fortune in my own career to work in several companies, including some large global companies like AT&T and Yahoo. I’ve also worked in a very important United States government funding agency called DARPA, which offers money to researchers and teams in large projects to do very advanced, very important research. More recently I’ve also had the opportunity to be at a university, and the way researchers approach research in those different places is a little bit different, although it’s exciting to me to see that, more recently, there’s been some convergence.
Typically, in the past, university researchers, faculty, and students were restricted to artificial data sets, or to small problems that fit in the scope of a class or were strictly defined by the research grant they received from the government. Companies, by contrast, especially broad international companies, have many lines of products and services against which they can do research and, especially in modern times, huge amounts of real-world data that show them what the real problems are. So, for a long time there was a divergence: university research was more academic, not grounded deeply in the real world.
The companies had the data, the products, and the customers, and almost everything they did was focused on practical outcomes. Recently we’ve seen a lot of collaboration: we’ve seen gifts and contracts from large companies to universities, and we’ve seen university faculty go on sabbaticals and work in companies. We’ve seen people, even at my own university, who work half time at the university and half time in a company. I think this is particularly important for artificial intelligence, where the theory is important, doing things in the abstract, on the whiteboard, but the real world, what we call "where the rubber meets the road", is really crucial for the future success of AI. So I see, if not total convergence, a coming together in many sectors of large-scale industrial research and university research.
What’s your opinion about the dark side of AI?
One of the hopes of AI, besides well-known things such as building robots, those things people usually imagine or see in science fiction movies, or assistants to help people in professions like the financial industry, which is a common place to see AI applications, is that researchers continue to grow their thinking about the ethical considerations in the technology and its ability to help people who are living in underdeveloped countries, suffering from discrimination, or simply lacking resources. AI is not a panacea; it won’t solve the world’s problems. But because of its ability to process data, use common-sense knowledge, and look very broadly at concepts, it could help understand and analyze problems in very poor areas, and help move resources efficiently and cost-effectively from places where they are plentiful to places where they are scarce. In the long run, that would help balance some of the social-welfare challenges we see in many places and address poverty. I think the most important thing is that we get AI researchers to start thinking along these lines, to collaborate with social scientists and people who understand economics and social welfare, and to really direct their attention there.
What’s the future of AI?
I’m not concerned about machines rising up and taking over the world or attacking humans autonomously; I think technology and science are still far from the levels of autonomy, self-learning, and replication that people are concerned about. It’s not that we shouldn’t worry about it, but it’s not a near-term fear. What does concern me greatly is the same concern you see with many types of modern technology: there are people in the world with ill intentions, who are out to cheat or harm other people, or to find ways to steal or gather resources without doing the hard work. Like any technology in the hands of clever people with bad intentions, AI can be put to really negative uses. It’s a little hard to tell what the underlying technologies are for some of the dark-side things we hear about. We hear about the dark web and the matching of people buying and selling contraband, or things that should not be allowed to be sold. There are issues of persuasion: we talk a lot these days about fake news and fake accounts, and the fact that AI could be used to make an account on Twitter or Facebook that looks like it’s being operated by a real person. It doesn’t have to be a very sophisticated AI. On Twitter, for example, you’re communicating with a very small number of words or characters, so it’s easier to mimic a human, and there’s no question that people are already taking advantage of what you might call AI, or machine learning, or just advanced computer science technology, to fool people.
What we need to do on the bright side, on the light side, is really look at how we can police AI better, how we can detect abuses, and how we can start to build into AI systems mechanisms that are reflective enough to have their own defenses, so that they might stop somebody from abusing them, if you will. We’re far from being successful at that, but I think it’s something important to think about, and while I don’t believe in a cataclysmic near future where AI robots take over the world, it’s very clear that we have to start pushing back in advance.