
IGF 2019 – Day 2 – Raum II – WS #36 Data-Driven Democracy: Ensuring Values in the Internet Age

So together with our panel of experts, we will discuss the above and hopefully more questions, and consider different positions to analyze the actual influence of AI, algorithms and filter bubbles on our society. We want to support a dynamic presentation and discussion of the main, diverse points by interacting with you, the auditorium, and also with the online community. We will reserve 60 minutes of the panel for the participation of all of you and for the exchange with the auditorium, both online and on site. So if you have a question, please go to one of the microphones, tell us your name and your background, and then raise your question. We will try to integrate questions from the audience throughout the whole session. But within the next 20 minutes, I have the great pleasure to introduce our panelists. Or better, I will give them the opportunity to introduce themselves. Panelists, please keep in mind not to exceed three minutes for each individual introduction and brief statement, so that we can dive into the discussion with the audience right after.

NADINE ABDALLA: Okay, thank you. So I work mainly on social movements and their approaches to democracy and to social and political transformation, which is what we have been witnessing lately. As for the question, I would like to raise a contradictory statement, or rather a paradox, here in the debate: social media has, of course, supported uprisings. It was a significant tool for mobilization during the Egyptian uprising, and it has also helped many movements to mobilize, like in Spain, like Black Lives Matter in the U.S., and during the Arab uprisings more broadly. But when it comes to building a democracy, when it comes to building consensus, social media did not appear to fare well. Here we can see the clusters of like-minded people that have formed all over the world. Egypt experienced this after 2011, where there was more mobilization of fear than mobilization of consensus via social media, because of the formation of clusters of like-minded people who interact mainly among themselves. Syria witnessed this as well, where we have seen the building of narratives of fear, of violence, and also of sectarianism via social media. So, in the end, I would say that yes, social media supports mobilization; yes, it offers a channel for grievances. However, when it comes to the moment of building a real democracy based on institutional and conventional politics, we see a certain paradox: the formation of clusters of like-minded people, driven by the algorithms of social media, and so on and so forth. In this case, social media is not so helpful for mobilizing consensus. Thank you.

CARMEN: Yeah, thank you, Tobias. I must say that prior to working for Honda, I was also a professor of technology for social networks, and I have a long background in researching new technologies that might influence how societies interact. My perspective is that technology is still one of the driving motors behind how those interactions take place: because you have Twitter and Facebook, you interact that way. That's why it is interesting to see what newer technologies might be out there that could help us solve today's problems. There are some solutions, for example, that allow you to process and analyze data without having the data: topics like privacy-preserving computing, based on encryption, on shared secrets, and on differential privacy. The main idea is that before sending out your data, you encrypt it or somehow distort it in a way that the receiver cannot see or interpret it, but can still do valuable and reasonable calculations on it, for example to create statistics out of it. Your individual data is no longer recognizable, but the statistics you get out of the community's data are still reasonable. Another point is how communication in social networks takes place. Because, as Nadine said, it has an impact on how democratic movements evolve, there are also research fields analyzing what we can do better. One big challenge that we analyze at the institute for Internet democracy is that you always have this linear view of the argumentation: if a bad argument is repeated 100 times, it overruns the good ones. If you had another visualization of the arguments, the facts and the points under discussion, maybe in a mind map (there are other options for visualizing such a conversation), it would help people identify the rarely made good arguments that might change the discussion. So technology, and research into new technologies that allow privacy-preserving computing and that support democratic discussions on the Internet, could help us overcome some of the challenges we have nowadays. Thank you.
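To make the "distort before sending" idea concrete, here is a minimal sketch of one of the techniques Carmen names, differential privacy in its local form, using the Laplace mechanism. The scenario (opinion scores in [0, 1]) and the epsilon value are invented for illustration; the encryption-based techniques she mentions work differently and are not shown.

    import numpy as np

    def randomize(value: float, epsilon: float) -> float:
        # Each user perturbs their own value before it leaves their device.
        sensitivity = 1.0                      # opinion scores lie in [0, 1]
        scale = sensitivity / epsilon          # Laplace scale for epsilon-DP
        return value + np.random.laplace(loc=0.0, scale=scale)

    # The aggregator never sees raw values, only noisy ones...
    rng = np.random.default_rng(0)
    true_values = rng.uniform(0, 1, size=10_000)
    noisy_values = [randomize(v, epsilon=0.5) for v in true_values]

    # ...yet the community statistic is still reasonable: the noise cancels out.
    print(f"true mean:  {true_values.mean():.3f}")
    print(f"noisy mean: {np.mean(noisy_values):.3f}")

With 10,000 participants, the aggregator's estimate of the mean lands within a few hundredths of the true value, even though no single submitted value can be trusted on its own.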

JESSICA BERLIN: Thanks for that. So as Tobias said, I am not an academic or a researcher. Rather, I am a practitioner who is constantly asking the question: how do we leverage and connect the resources and infrastructure of the public and private sectors to solve global challenges? And in this space, to answer your question about what this means across the world: it means that context is everything. When we talk about data-driven democracy and what it will take to build an inclusive, sustainable digital infrastructure that enables an equitable society, that means a different thing in Germany than it does in China, in the U.S., in Sierra Leone, Zimbabwe or Brazil. Because depending on who your government is, and who is holding the veto cards that determine what happens in the regulatory environment and who gets to own data and use it for what ends, how you want to build solutions and strategies changes completely and fundamentally. So, coming from the broadly stated Western world, we are having different debates about this than activists and digital innovators in countries with less robust democratic institutions. This is an issue that needs to be addressed uniquely, context by context, in each region. And as a starting point in these discussions, regardless of whether we are policymakers, in the private sector or in academia, we have to recognize: who are we answering this question for? When we talk about ensuring values, whose values, and who decides? And how do those values differ from place to place, and how must that inform our strategy?

JESSICA BERLIN: Yeah, that's an excellent question. In a single word: ask. Inclusion. You need to ask the people you are ostensibly designing for, right? And if you don't know who that is, ask. Find out. Country by country, sector by sector, context by context, ask yourself: who is already active in this space? Who do I know is active in this space? Talk to those people and find out who I might not know. What do I not know that I don't know? Because when we come from large multilateral or bilateral institutions, at this big global macro policy design level, we often don't know what we don't know, especially when we're talking about grass-roots innovation or digital communities, where there is such a gap between the culture of those organizations and the culture of the large institutions. So find people who build bridges between those spaces and find out who you should be talking to that you haven't. And I see this so many times, because in my function through Costruct, building bridges is what I do. You have a partner from a major institution who doesn't know the key local players working on the issue, even though they've been in the country for months and months, or even more than a year. So those conversations aren't being had, and people who are actually key to the process are not being included. So even the fact that you asked that question is already an excellent sign. And encourage your colleagues and other partners to do the same.

MATTHIAS KETTEMANN: I should comment on that. I think the potential of data for development is still largely untapped. If you talk to development experts in a lot of ministries, they perhaps do not grasp exactly how they can use data, especially open-source data, to make their development policies more efficient and more effective. I have come across so many great examples where people from local communities used, for instance, asset mapping to show where the sewers were, to put themselves literally on the map. Because we should never forget that when we talk about data minimization and the importance of who has access to data, this is coming from a very privileged position: we produce so much data that it is almost dangerous if others know too much about us. But there is also a huge part of the world that doesn't produce data, that is nonexistent in the datafied sphere. Simply put, if you are not visible to your state, you will not receive a license. You will not receive a birth certificate. You will never receive money. In certain societies, in a couple of years you perhaps won't even be able to board a bus. So producing some data can also be extremely beneficial for development. We have to keep that in mind. Data in itself is nothing good or bad; it is always and ever what we do with the data. And especially in development policy, I think we have to think critically about how to use data in a better way.

MATTHIAS KETTEMANN: A brief comment on that. I totally agree that conceiving of such a global identity might have very positive effects. However, there are also, especially in certain societies, huge issues with the idea of a universal database. And our history, I think, has shown us that databases are never, ever safe. So we would first have to think really hard about the technology to be used. There are alternatives. There is this great project called what3words which allows you to localize yourself anywhere on the globe using just three words. They have a database of words, and you can pinpoint every three-metre square in the world with three of those words. It is an alternative to traditional geographic location and could be used by people who are disenfranchised by, for instance, not living on streets which are mapped. So we still, I think, need to think through such a notion of digital identity before we can proceed.
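The principle behind such a scheme is easy to sketch, even though the actual what3words algorithm and wordlist are proprietary. The toy below only illustrates the idea of mapping grid cells to word triples; its eight-word list is invented and far too small to be collision-free (a real system needs on the order of 40,000 words so that the number of triples covers all of the roughly 57 trillion three-metre squares).

    # Toy illustration only: NOT the real, proprietary what3words algorithm.
    WORDS = ["apple", "brick", "cloud", "delta", "ember", "flint", "grape", "haze"]
    N = len(WORDS)                   # a real system needs ~40,000 words
    CELL = 3 / 111_000               # ~3 metres expressed in degrees

    def cell_index(lat: float, lon: float) -> int:
        # Number the grid cells row by row, west to east, south to north.
        row = int((lat + 90) / CELL)
        col = int((lon + 180) / CELL)
        cols_per_row = int(360 / CELL)
        return row * cols_per_row + col

    def to_three_words(lat: float, lon: float) -> str:
        # Encode the cell index in base N as a word triple. With this tiny
        # wordlist the triples wrap around (modulo), so distant cells collide;
        # a large enough wordlist makes the mapping effectively unique.
        i = cell_index(lat, lon)
        return ".".join(WORDS[(i // N**p) % N] for p in (2, 1, 0))

    print(to_three_words(52.5200, 13.4050))   # some three-metre square in Berlin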

Now, he was also a victim of computer-generated smear campaigns. This eventually led him and his team to try to figure out whether this was based on AI, and AI is created based on data, of course, massive amounts of data. That is how they started discussing ideas for an AI regulation. As a concerned citizen, and as someone from the same state whose experience is relevant to this case, I volunteered. I approached him, I wrote a report commenting on the bill's flaws, and now we are discussing it. But what is essential to this discussion is the use of data for AI, or let's say the misuse of data for something that is inherently against democracy: data used to generate industrial-scale, computer-generated, human-sounding gibberish, disinformation. This is affecting our democracies. And it is having a feedback loop on our regulation and our legislators, who are struggling to frame the situation. This may lead us to a future of reduced development, because if our laws are created based on these experiences, we may be endangering development and innovation in AI and in other fields.

CARMEN: I would also like to say some words on that, because as a computer scientist, what always puzzles me is that there is lots of discussion about "I don't like this, I don't like that": about how Facebook is used and what effects it has. But what is missing is a counter-design. What should the usability look like that allows you to check what data is stored? Or how should the cookie notices on the websites you visit be presented differently? What would you like to have? As a computer scientist, I'm always very happy if you come up with a design document saying "this is what I would like to have," because then we can implement it. In the end, everything is doable, everything is programmable. And if you don't like something, then either find somebody who can build the alternative or describe what you would like to have instead. So, coming back to the point about usability and laziness: of course, there are different interests involved. On the one side, the companies would like to collect the data, but they still have to provide you the option to opt out. And if you don't opt out, it is still your choice. So how would you like to have that choice implemented, if not that way? My impression is that some creativity is needed to really capture all these requirements and all these cases: people who don't want to have their data collected, people who are oblivious to it, people who don't care. How would you like to have it all in one big technological solution? And if that's doable, then it's easy to program.
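As one concrete stab at the kind of design document Carmen is asking for, here is a hypothetical sketch of a machine-readable consent model. Every name in it is invented for illustration and implies no real consent-management API; the single fallback value in it is exactly where the opt-in versus opt-out question raised next lives.

    from dataclasses import dataclass, field
    from enum import Enum

    class Purpose(Enum):
        FUNCTIONAL = "functional"     # needed for the service itself
        ANALYTICS = "analytics"
        ADVERTISING = "advertising"

    @dataclass
    class ConsentProfile:
        # Opt-in by default: every purpose is denied until the user allows it.
        # Changing the fallback in may_collect to True would turn this into
        # an opt-out design, the nudge discussed on this panel.
        choices: dict = field(default_factory=dict)

        def allow(self, purpose: Purpose) -> None:
            self.choices[purpose] = True

        def may_collect(self, purpose: Purpose) -> bool:
            return self.choices.get(purpose, False)   # deny unless chosen

    profile = ConsentProfile()
    profile.allow(Purpose.FUNCTIONAL)
    print(profile.may_collect(Purpose.ADVERTISING))   # False: never implied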

Data sovereignty, on the other side, is a very fluid idea: that you, either as a state or as an individual, are able to use the data resources that you produce, and that you need to take decisions, in a way that is not dependent on other entities. So, for instance, it would be a violation of the concept of data sovereignty if you had no possibility to get your data back from the companies whose services you use. Data sovereignty, I think, is one of the key notions for the future, even though we don't quite know yet how it's going to work out, or which rights you can trace to data sovereignty. But I believe this idea of re-establishing sovereign decision-making on how to deal with your data is important. And that is why I'm not quite sure that using the word "lazy" is always the best choice. I think people are just people. We use statistics to make decisions; the world is complex and we have really limited time. Therefore, I think it's better to think about how we can nudge people towards the right decisions, the best example being, of course, privacy opt-ins versus privacy opt-outs.

GUSTAVO PAIVA: I think Jessica hit the nail on the head: disruption is a business model, and businesses can actively try to stay ahead of regulation and capture a market before regulation hits. So that is one point. I also think that many governments in the world don't really have an interest in this common idea, because it really is about making technology for your own reality, for your own national industry, and so on. It could also raise some questions about security and centralization: maybe we don't want country A, B or C to have such a central role in AI. It is also good to keep in mind that AI is a highly dynamic technology. It has existed for decades now, but it goes through winters and then periods of rapid development, so it really is an unpredictable technology; maybe in ten years it will be completely different from now. Much of the debate we have today about AI is more specifically about machine learning, and we are still struggling with the implications. So even if it were desirable and possible, I don't think the world and its countries are ready to have this discussion of a minimum standard yet.

And we don't need to reinvent them, you know. When we say we don't have minimum standards, that just means we haven't quite clearly established how exactly certain sectoral uses of artificial intelligence can be carried out in a way that does not endanger large datasets: for instance, AI in hospitals or AI in military technology. So I think we should be careful not to convey the impression that we are entering a no man's land of regulation. We have laws. We have standards. We have soft law standards. We are not entering an unknown world. It's a point that comes back all the time in Internet-related legal discussions: we don't need to reinvent everything. So first of all, don't believe that there is no rule just because a technology is new. And it is also not at all certain that you can't regulate for the future. It's difficult, of course. But just think about the General Data Protection Regulation and its right of access to the logic of a decision taken by an automated decision-making system. Such a rule, which hasn't yet been invoked very often in front of courts, is a good example of how you can provide technology-neutral, future-oriented regulation. And the GDPR is a success story: the California bill is basically a copy of it.

ELKE GREIFENEDER: When we say users do not adapt to technology, it means that, yes, we have to build a system, but we always have to keep in mind that users will use it as it fits them. A very practical example is dietary apps. There are a lot of studies on people who suffer from anorexia but do not know there are apps to help them. So what do they use? They use dietary apps to overcome anorexia, which means that every time they gain weight, which is good, the system tells them it's bad. But they keep using it; they take the pieces they need out of these things. So, to come back to the question, I think we shouldn't just throw a system out there and say: here it is, great. We talked a lot about users and design, and I think the term that is coming up more is co-design. In the usual approach, you invite the users to give nice feedback and then you say: hello, good-bye, users, thank you, now we have finished developing our product. Whereas co-design is a longer process where you keep being in contact with the users, you keep monitoring what actually happens: how do they use it, and how might we need to adapt? Does that answer the question?

MATTHIAS KETTEMANN: I think you really hit the nail on the head here. We first have to ask the right questions, and what we are doing here is really great. And we have to go back to the toolbox of regulation. Again, we don't need to reinvent the wheel. It is important not to reinvent the wheel, so that we can go back to the regulatory tools we have; but those tools should be informed by new insights into how humans interact with technology. We haven't talked about affordances yet: what do products make us do, and how can technology shape what products make us do? Think of the pull to enter something into a blinking box when the program asks you: so, what did you do today? These are aspects we need to take into account when drafting those rules, and those rules need to be very smart, which is a problem, because societies are getting progressively smarter and parliamentarians need to keep up. So yes, sometimes policymakers are not, let's say, the most technologically versed ones, and they cannot be, because that's not their specific role. But parliamentarians have gathered here at the IGF for the first time from across the world. I've talked to parliamentarians from Ghana, from the U.S., from Kazakhstan. It's really great that they are here, and I think we are going in the right direction. Perhaps that's just my sunny Austrian nature.

JESSICA BERLIN: Yeah. Touching on what you just said about engaging the users in the design process, this links back to previous comments around inclusion: reaching out to communities that are not already in online fora giving feedback on digital tools, but going to rural areas, for example, or to older communities, elderly citizens, and engaging with them and seeing how our products, services and technologies can help solve their problems. Understanding the user, and understanding their context, is at the core of this. And I think this is how, as the question was formulated, you help bring society to the technology rather than the other way around. When you can show someone that this new technology doesn't bite but is actually going to solve a problem for them, that it is going to make their life easier, that incentivizes engagement and incentivizes use. So you need to ask yourself: what problem am I trying to solve? And when you've identified that, ask yourself: who is not in the room and should be? Who have we not spoken with that we should? And then design accordingly.

Or if we're talking about education and school systems, with teachers, et cetera, being stuck in their ways: that's where the incentives have to get a little bit harder, and where you also create disincentives to not changing. If they just want to be a bit lazy and keep doing what they know, don't want to be fussed, that's where the government has to step up, enforce, and say: if this is the new standard we're using in our school system, then you have to do this or you'll lose your job, to take an extreme case. But it is also about making it easy, or as easy as possible, for people to adapt. You're not just saying: hey, here's a new system, now you have to use it. You're giving trainings and workshops and gathering their feedback, rather than just dumping something supply-driven in their laps and saying: now you have to use this. Make sure they're part of the process from the beginning: the teachers, the principals, the school administrators and the students. What are your current haves, needs and wants? What's working, and how can we co-create using the digital tools out there? Because I think if people are part of the process from the beginning, then it feels like: okay, I'll do this now; hey, we're co-creating this new system; they've at least heard my feedback; I know what to expect and what's coming. Yeah, thank you for that question.