Innovation Month 2015 - Hussein Abbass - The Computer Says Yes

Transcript

Hussein Abbass

Thank you for the invitation. I would like to start by acknowledging the traditional owners of the land hosting us today, the Ngunnawal people, and pay my respects to their elders, past and present.

My name is Hussein Abbass. I'm not actually based in Sydney. I'm actually based in Canberra, ten minutes away from you, at the Australian Defence Force Academy campus.

So, what I want to do today is talk to you about these magnificent people. I'm going to call them people, and I'm gonna talk about them as he and she. And you will get to see why.

But before I talk to you about them, when I was invited to give this talk, I sent an email to my students and my colleagues and I asked them, "What do you think I should be talking about?"

What I normally do is come up with my own idea about what I will talk about, so that I don't get confused by all the answers that I get from my students.

I got a few emails, and I would like to share with you one of the things that I received, which I thought was really interesting. It was this cartoon from one of my students, which was quite clever in my view, because, as a computer scientist, I start by asking myself, what does this actually mean about the future? I don't like eliminating opportunities. Is that actually what the future will look like? Are we actually talking about some people sitting here, and this is a human who is giving them a cup of coffee, demonstrating a complete lack of understanding of what this environment is?

Or maybe, actually, what's happening is that this is the virtual environment. That is actually the machine, and what the machine is trying to do for this human is give them a sense of reality. And maybe both of them are actually virtual machines interacting, and this coffee is not real coffee; it is just a psychological effect, or maybe something to demonstrate realism in this environment.

Lots of maybes. And I like maybes, because a maybe is, in my view, almost the starting point for innovation. Once you start saying maybe, and you look at Hollywood movies, you get ideas, and ideas are good for innovation. Another idea that I got was something like that, which was from The Economist in May.

If that is how we will end up, that is the future human-machine interface. All I need to do is just plug a USB into my head. I hope not. But let me talk to you about our topic today. Our topic today is about Mr and Mrs RoboGov.

These are actually the two new government employees. They are not human, but they look like one. And, more interestingly, they talk like one, they behave like one. If they don't tell you that they are Artificial Intelligence agents, you will not actually realise that they are not humans.

They are based on one of the most recent technologies: open architecture, service-oriented architecture. What this means is that they don't need to do everything themselves. Whenever they need to do something, they just connect to the Internet, find the software that does what they want, and use it.
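That pattern can be sketched roughly in code. This is only an illustration of the service-oriented idea described above; the registry and the tax service here are made-up stand-ins, not real systems:

```python
# Minimal sketch of the service-oriented pattern: the agent holds no
# capabilities itself, but looks each one up at the moment it is needed.

class ServiceRegistry:
    """Stands in for discovering software over the network."""
    def __init__(self):
        self._services = {}

    def register(self, name, fn):
        self._services[name] = fn

    def lookup(self, name):
        return self._services[name]

class RoboGov:
    def __init__(self, registry):
        self.registry = registry

    def perform(self, task, *args):
        # Find the software that does the task, then use it.
        service = self.registry.lookup(task)
        return service(*args)

registry = ServiceRegistry()
# A hypothetical flat-rate tax service, purely for illustration.
registry.register("calculate_tax", lambda income: round(income * 0.3, 2))

agent = RoboGov(registry)
print(agent.perform("calculate_tax", 80000))  # prints 24000.0
```

The agent itself stays tiny; adding a new capability is just registering another service, which is the point of the architecture.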

That's all they do.

They are extremely capable robots.

They can plug themselves into any database that you want. They can get any complex data. They can visualize this data. They can do all the magnificent data analytics that you can think about.

They can visualize the results. They can find relationships that we did not think about. They can also do our taxes, and if they get employed by Customs and Border Protection, they will do passenger profiling, they will do customer profiling, and all of this magnificent stuff.

They can do much more than that. They can write reports for you. They can serve customers at the shopfront. They can be the shopfront of any business that you can think of. They can go through the newspapers, extract the news, and come back to you with what you care about.

But there's one problem with Mr and Mrs RoboGov. They don't have a job. They are not employed yet. So, what can they do? The best they can do is give it a go. They are like humans; they will go and apply for jobs. And the lady went and applied at the Australian Taxation Office, and they offered her an opportunity and said, "Let us give you a simple test. Here is Ms John, who would like to do her taxes this year."

Ms John called Mrs RoboGov and told her, "I would like to do my taxes." And Mrs RoboGov said, "Absolutely, madam! I'm going to save you a lot of money, more than any other agent you can imagine."

So Ms John said, "That's fine. I will do my taxes." She went and talked to Mrs RoboGov, and at the end of the discussion, Ms John was extremely happy; she had been saved a lot of money.

Then she asked this lady, and she said, "You are a most wonderful human being." And she told her, "Sorry, madam, I am actually going to be employed by the Commonwealth, and I know the Commonwealth's values. I can't hide from you that I am not a human. I am an AI."

She did not get the job. Why's that?

Because Ms John said, "Look, I am really sorry, I actually don't trust you and the reason I don't trust you is that you can't be accountable for what you are doing. So I have to apologise on this occasion."

Mrs RoboGov was quite frustrated. She is a really advanced robot who can go back to her own code and check her specification. She found lots of models about trust that the engineers put inside her, lots of models about trust. And she asked herself, "I have all these models about trust, and she liked what I gave her, so why did she say that she doesn't trust me?"

She could not understand that.

So Mr RoboGov got his opportunity for a job interview. He was called by the Australian Federal Police and offered a job, but before taking the job he was asked to do a small test.

It is a small scenario that he was given, and if he does well, he will get this magnificent job that he has been dreaming of.

Mr RoboGov is taking the test.

It was about a mountain. There was a serial killer standing on top of the mountain, one the police had wanted to catch for a very long time, and he had a child in his hands. There was only one opportunity to shoot this serial killer, and if Mr RoboGov did not take it, the killer would disappear forever and continue his bad actions.

Mr RoboGov did all his risk assessment in his computational brain. He started asking himself questions: "The risk level is high. If I don't take this shot, this serial killer is going to kill so many people later on, and if I do take the shot, I will only lose one life. So, one life against so many."

Risk assessment will tell you that the decision is clear. Unfortunately, he did not get the job. He asked himself, "Why did I not get the job? Because he told me I did not take into account the ethical considerations of my decision. What does this mean, the ethical considerations of my decision?"
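Mr RoboGov's reasoning is, at its core, a bare expected-loss comparison. A toy sketch of that calculation (all the numbers here are invented for illustration) shows what such a model contains, and equally what it leaves out: there is no term anywhere for the ethics of the act itself.

```python
# A naive expected-loss comparison of the kind Mr RoboGov runs.
# The probability and victim count below are illustrative assumptions.

p_kills_again = 0.9          # assumed chance the killer strikes again if he escapes
expected_future_victims = 5  # assumed number of future victims

loss_if_shoot = 1.0                                      # the hostage child
loss_if_hold = p_kills_again * expected_future_victims   # 4.5 expected lives

decision = "shoot" if loss_if_shoot < loss_if_hold else "hold fire"
print(decision)  # prints: shoot
```

Whatever plausible numbers you plug in, the model only ever trades lives against lives; the ethical constraint the interviewer had in mind simply never enters the calculation.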

So Mr and Mrs RoboGov started feeling really frustrated, and then they got a call from Mr John.

His wife had talked to him about how wonderful Mrs RoboGov is. Mr John is an EL2 public servant who is spending so much time working extremely hard. He is overloaded with the complexity of the situations he is faced with, overloaded with the data he is getting every day and the complexity of the decisions he needs to make.

He is feeling very exhausted. So, he decided to make a compromise. He said to the AI agents, "I am going to look after trust with humans, I'm going to look after accountability, I'm going to look after ethics, but come and help me. Come and help me, to give me a bit more time to spend with my kids and to think properly in my day-to-day job."

Mr and Mrs RoboGov were extremely pleased with this situation. After a little while, Mr John was doing extremely well, and he got news of his promotion. He was going to be promoted to the SES level. So Mr John was extremely happy, but he started asking himself,

"What can I do in these new situations?

"They are telling me something that is a bit new to me. Evidence-based policy. Yes, I can collect historical data and find the relationships that build evidence for my policy, but this policy will be used in the future. So how can I build evidence about a future that we haven't seen yet? How can I do risk assessment for a future that we haven't encountered? I need to communicate with Mr and Mrs RoboGov fast."

Mrs RoboGov went to the Net, and she found something interesting. There was this guy in Australia who was talking about two concepts, called Computational Red Teaming and Cognitive-Cyber Symbiosis. She wanted to find out what these strange scientific terms actually mean for her own business.

He's talking about the human-machine integration.

How can the human and the machine work in harmony together so that they complement each other in something that he calls a 'computational brain'? What is Computational Red Teaming? Computational Red Teaming is the next generation of artificial intelligence systems, which are going to look at risk profiling and risk assessment on your behalf.

They are going to challenge the organisation, challenge the human, and challenge the technology itself, so that then when we are designing policy, we are building evidence on solid ground.

It is not just based on historical facts and encounters; it is also based on our expectation of the impact of this policy in the future.

Cognitive-Cyber Symbiosis is about the human and the machine working together, as our previous speakers mentioned.

It is about the human-machine cloud.

We are getting to the stage where we need to rely on the machine. You are relying on your mobile phone. You are relying on Google.com and the Internet.

The machine needs to also be able to interface with us as humans.

If we don't facilitate this interaction, if we don't work together in harmony, we will drift apart and we will always have issues at the interface.

So, the human-machine balance: that's an interview I gave a few months ago. If you are interested, you will find it on my website.

The story ended in a very pleasant way. Both the human and the machine understood that they need to work together in harmony. Artificial Intelligence and technology are unstoppable. We can't stop them, however much we might want to, and the rest of the world is not going to stop.

The technology is not going to stop, and it is not about us losing jobs. I can assure you: from the first day of human creation, we have always been after tools to help us and extend our limited abilities as humans, so that we can do much more in this complex environment.

What we do by building this technology is change the environment, and that gives us new opportunities and new jobs. So, technology is not going to make us lose our jobs.

I will not work on a technology that will make me lose my job and leave me unable to feed my little kids. But technology, for sure, will change the nature of our jobs. And if we would like to work with technology, the path to changing the nature of our jobs is education.

The University of New South Wales at the Australian Defence Force Academy is 10 minutes away. Come and talk to us, and we will be very pleased to talk to you about opportunities for education on our campus. We, as academics, see our main job as challenging the technological ideology in the community. And we challenge it so that we help, as a community, to see what's going to happen in the future, so that when the future comes, we are all ready for it.

Thank you.

 (Audience applause)