I think people believe they do, but they don’t. Let me give an example. In Los Angeles they had what’s called the coordinated entry system [a mechanism for connecting people at risk of exclusion with economic and housing support services]. It is supposed to consist of two algorithms: one that assesses a person’s degree of vulnerability and another that matches them to resources. Well, the first was just a survey and the second was a guy sitting in a chair, doing the work by hand with a Google tab open. But it was sold as a complex and intricate system so that no one would complain.
It is true that there are some genuinely complex tools, such as those that use artificial intelligence to train systems that learn on their own, without human intervention. But even in those cases, what I’ve found in my research is that people, while they may not understand how the algorithms work, know what their effects are on their lives. And they can make very good guesses about how they work.
What kind of assumptions?
Another example. The governor of Indiana signed a major contract with IBM to automate eligibility systems for all of the state’s social programs, and before the contract was signed, people already knew what was going to happen. Process automation is a way of not talking about the problem; digitization breaks the relationship between social workers and users, dehumanizing the process and making benefits harder to access. That’s what people said would happen, and that’s what happened.
The contract cost Indiana a billion dollars and went so badly that people forced the governor to cancel the agreement. IBM sued the state for breach of contract and ended up winning $50 million in damages. The economic cost was significant, but the social cost was even greater. Fifteen years later, they are still working to compensate the people who lost their benefits. One of my sources inside the government told me that if they had set out to design a system to systematically deny benefits, they couldn’t have done it better.
Do you think we lack guarantees of information about these tools? In Spain we have the Rider Law, which recognizes the right of platform workers to know how their algorithms work, but users and workers outside the platforms are not covered.
I believe that rather than handing us information to receive passively, what these tools need is to listen to the voices of the people affected by them. When we design them, the conversation involves programmers, office workers, politicians, academics… It tends to turn abstract very quickly, and philosophical questions arise about whether robots will steal our jobs, but we don’t ask how these tools affect the daily life of someone who depends on public assistance. That abstraction is part of the problem, because it prevents us from recognizing the true impact of algorithms.
Recognizing the right to citizen participation has been a pending issue since long before the arrival of algorithms.
It would be a big change, and it wouldn’t necessarily work. In the United States we have a major cultural problem rooted in the belief that the poor are a small part of the population who are, moreover, responsible for their own poverty. Even many poor workers believe this and would prefer to design the most punitive system possible to oppress the poor. We need to fight these beliefs and make people see that many of us are close to the brink of poverty. But that is not a truth we are willing to hear.
You mention in the book the option of creating a Hippocratic Oath for the people who design these tools. How would it work?
I’ve actually changed my mind about that. I thought the book’s readers would be politicians, public sector workers, designers, and technicians, but they’re not. It’s people affected by the system who have read me. It’s interesting because, all too often, the publishing world assumes that people applying for benefits don’t read. But that’s not true. Usually, when you write, you write for the audience you think holds the solutions, and you’re wrong. I no longer think the solution is to make engineers more empathetic and sensitive. Technology is the new finance, so imagine someone in the ’80s proposing to fix economic inequality by making brokers more likable. It would look stupid, wouldn’t it? Well, that’s what this is now.
I think the solution is where it has always been. Nothing is achieved without protest, and I look with great optimism at social movements, communities, and neighborhoods that have the potential to change policies, as happened in Indiana. That was an old-fashioned protest, with protesters staging sit-ins and taking to the streets. And it succeeded.
Perhaps many people think that to fight high technology you must be a hacker, but that’s not the case.
No. It always helps to have a hacker on board, for their expertise, but that’s not the only thing of value. In fact, I think a hacker’s experience can be a little limited. There is an engineer from New Zealand who designed a tool that was rejected there and later purchased in Pittsburgh. This engineer wrote an article in which she claimed that data scientists will eventually replace the entire bureaucracy, because what the bureaucracy does is collect information and deliver it to the right people at the right time, and this process often fails. If politicians were able to get the right information at the right time, she argued, eventually everyone would agree on what needs to be done.
She is a very smart woman, but very foolish when it comes to politics. Information is power, and we wouldn’t necessarily agree even if we all had the same data, because politics is the human and contentious task of evaluating data. Her view was naive and criminally simplistic, as are the answers that these tools, with their extremely narrow scope, can give. I think algorithms tend to shrink a problem down to fit the available solution, and what we need is to keep problems as large as they really are and treat them as such.
You commented earlier that there is an intent behind the algorithms used in social programs. We’ve talked about the role of these tools in public policy, but what about social networks? We recently learned about the Facebook Papers. You may have been asked this many times, but are we heading toward a world like 1984?
You’re actually only the second person to ask me! Facebook is not a friendship company but an advertising company, and we shouldn’t be surprised when such leaks surface. People should be aware of the role these platforms play in social control by the state. We may be heading toward a world like the one George Orwell described, but do we choose it? It’s true that we’re free to decide whether to share our data on Facebook, but if you need to eat, you can’t choose whether or not to apply for food assistance.
Is there a relationship between these social platforms and public policies?
There are many links between the decisions these systems make and social control, and this is not unique to the platform economy. Economic assistance is increasingly politicized, and because it is so easy to connect our lives and identities with our social and economic needs, the link keeps getting stronger. The exchange of information between the surveillance system and the assistance system is very worrying; it’s important to keep them separate, even though they no longer are. In the past, at shelters for the homeless, there were people who were taken away by the police.
Do we now carry the police in our pockets?
In countries like the United States, poverty is highly criminalized, and the things you have to do to survive are considered illegal, especially for the homeless. So when you ask for help, acknowledging the reality of your life is self-incriminating. These are systems that talk to each other, which has very serious consequences, especially for how social workers understand their jobs. Some of them told me that in the past they saw themselves as accompanying people through crisis, but now they see themselves as detectives. Going from accompaniment to policing is dangerous, and digital tools have intensified it.
The book was written before the pandemic. Would you add anything?
The pandemic has made the limits of these tools very clear: there is no state in the United States whose unemployment system did not collapse. The biggest lesson of the pandemic is that all those things we thought impossible can become real at any moment. Now we have to think about how to respond to the situation that has emerged. I don’t want to fall into the cliché of calling it a political opportunity, because that would be inhumane, but we are facing a very interesting moment. I am a deliberate optimist, because I trust in the potential of social movements and believe we will be able to face what is to come.
This was going to be my last question: do you consider yourself an optimist about technology?
I don’t think optimism is a trait, but rather a practice. That is one of the great lessons I’ve learned from the people I’ve worked with: although they’ve been through very hard situations, they are optimistic and can be funny, generous, and committed to society. In the face of that, for me, a lack of optimism is a betrayal. But it is not a naive optimism. My mother asks me how I keep from getting depressed by what I see in my work, but I think I’m very fortunate: I see people fighting, and I still think things can be turned around. And technology can be helpful for that.