It’s been a while since I delved into this interesting and increasingly discussed topic, so I will go through an article I recently came across on Twitter, written by Orlando Torres and published by Towards Data Science (which appears to be a publication on the Medium platform).
When new technologies become widespread, they often raise ethical questions. For example:
- Weapons — who should be allowed to own them?
- Printing press — what should be allowed to be published?
- Drones — where should they be allowed to go?
An interesting way to start, since one could argue that none of these questions has really been fully answered yet.
The most apparent case in the context of the US is weapons. That conversation has been ongoing for decades, and will likely continue to be (short of a miracle).
As for publishing, in these days of relative calm and first-world freedom of speech, pretty much anything goes. However, the narratives that rise to the top tend to be those that are good for power. In the past, narratives good for power tended to be all that was allowed.
So again, we can bring ourselves back to the start. People angry about changing times (“I can’t say insensitive and bigoted things without people getting all triggered!”) certainly would agree. But I don’t align myself with self-serving idiots (even if we share one common thread).
As such, I say this. What tends to get lots of airplay is what is good (or at the very least, benign) for those in power. Granted, modern-day social media algorithms have shaken up this notion a bit, given their heavy reliance on user inputs (more on this later). Nonetheless, what is good for those in power is still the basis of most popular discourse within societies the world over.
Drones make things even more interesting. The readily available consumer units pose a never-before-faced challenge to the privacy rights of pretty much everyone. Not even a 50-foot fence can protect you anymore.
Not to mention idiots flying these things in aircraft corridors. Though most collisions so far have simply bounced off the aircraft, I fear what could happen if one of these units (with its lithium-ion payload!) ended up in an engine. Particularly a jet that is throttled down (landing!) or otherwise at a low altitude. This airspace also tends to blanket populated cities.
You get the picture.
That conversation is pretty much finished, however. Few would disagree that only the most inept or careless would use a consumer drone in such a callous fashion. But then again, given the state of the conversation on assault weapons and weapons in general (in the US), one can’t even assume that a line can be drawn at the point of the common good.
After all, drones don’t kill people. People flying drones into aircraft engines, which then crash into heavily populated buildings and neighborhoods, kill people!
A nice segue into part 2 of this. The drones of war.
Nations utilizing these things on a regular basis certainly have answered the question that is “Where should these predator drones be allowed to go?”. Any nation full of brown people!
Fine, that was a loaded statement. Even though I don’t ever see these things being used to take out the scum of the earth in any first world nations, we can’t go there.
Waco served as a good template for that storyboard. First Timothy McVeigh and the Alfred P. Murrah building, then Columbine, then Virginia Tech (and who knows how many other roots).
Of course, we don’t listen to this logic in the Middle East. We level hundreds of homes and buildings and kill thousands of innocent people, then assume that the ISIS phenomenon sprang out of nowhere.
Firing and releasing hundreds of trained Iraqi military leaders and personnel into a power vacuum filled with raging people craving ideological structure . . . who could have seen the endgame to this experiment coming?
Though important, that is just a side effect of the predator drone and unrelated to the question of where these things should be used. The United States and other western nations seem to agree that they are only to be deployed in enemy territory. I highly doubt the people native to these lands would agree, however.
Is it any wonder that they keep showing up in Europe by the thousands? We like to pick apart their presence and its probable effects on contemporary society (mostly theoretically), but we certainly don’t like considering exactly WHY they felt the need to come in the first place.
Not to mention the financial angle. Most conservatives I know are all about cutting services to these refugees, but I rarely hear about the costs of the never-ending war that drove them there.
An amount that I suspect would make caring for all American citizens under a single-payer system (with pharmacare!) look like a penny.
Or at very least, a one dollar bill.
The answers to these questions normally come after the technologies have become common enough for issues to actually arise. As our technology becomes more powerful, the potential harms from new technologies will become larger. I believe we must shift from being reactive to being proactive with respect to new technological dangers.
We need to start identifying the ethical issues and possible repercussions of our technologies before they arrive. Given that technology grows exponentially fast, we will have less and less time to consider the ethical implications.
We need to have public conversations about all these topics now. These are questions that cannot be answered by science — they are questions about our values. This is the realm of philosophy, not science.
Artificial intelligence in particular raises many ethical questions — here are some I think are important to consider. I include many links for those looking to dig deeper.
I provide only the questions — it’s our duty as a society to find out what are the best answers, and eventually, the best legislation.
While I don’t disagree, Artificial Intelligence is hardly the first benchmark in the conversation that is technical innovation versus ethics. While a great many breakthroughs could fall into this category, one of the most obvious seems to be nuclear weapons.
There is NO upside to having them around, PERIOD. A war between Pakistan and India ALONE would be enough to effectively wipe out our species. Let alone the fact that the United States seems hell-bent on picking a fight with someone, be it Russia, China or some other unforeseen player.
Though it could come across as an unfair question (depending on the circumstances) . . . where were the philosophers during the Manhattan Project? I can think of no position that is more against logic than mutually assured destruction.
Artificial Intelligence in the wrong hands (or if not properly managed) COULD go in a bad direction and turn out to be bad news for us. But at the same time, it could also become just another tool in the human toolkit of experimentation, innovation, and exploration.
1. Biases in Algorithms
Machine learning algorithms learn from the training data they are given, regardless of any incorrect assumptions in the data. In this way, these algorithms can reflect, or even magnify, the biases that are present in the data.
For example, if an algorithm is trained on data that is racist or sexist, the resulting predictions will also reflect this. Some existing algorithms have mislabeled black people as “gorillas” or charged Asian Americans higher prices for SAT tutoring. Algorithms that try to avoid obviously problematic variables like “race” will find it increasingly hard to disentangle possible proxies for race, like zip codes. Algorithms are already being used to determine credit-worthiness and hiring, and they may not pass the disparate impact test which is traditionally used to determine discriminatory practices.
How can we make sure algorithms are fair, especially when they are privately owned by corporations, and not accessible to public scrutiny? How can we balance openness and intellectual property?
Algorithms can present biased results when programmed with biased data from the outset. A big problem that has to be addressed.
A french fry factory has a similar problem. The product they put out is low quality, mostly because the raw potatoes they bring in and process are substandard. But they are cheap and plentiful. How does one fix this problem?
Indeed, this is only part of the problem as outlined. But nonetheless . . . come on. You don’t need to be an ethicist or a philosopher to solve this enigma.
As for keeping an eye on proprietary algorithms, that is more of a challenge. It reminds me of the struggle that is keeping track of various wall street financial instruments like derivatives. The equations are often so complex and complicated that what regulation DOES exist is very hard to enforce.
Good to know that the fate of the world economy is in the hands of a bunch of probable psychopaths who haven’t learned a thing from the events of 10 years ago.
Fortunately, the two are not entirely identical. Mainly because one can keep track of algorithms just by way of their results. Keep tabs on these various outputs, and issue notices/warnings/fines/cease and desists as necessary. Even if one can’t see inside the black box that drives Bank A’s loan authorization process, one doesn’t need to. One just needs it to follow established guidelines.
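To illustrate what I mean by keeping tabs on outputs, here is a minimal Python sketch (with entirely made-up numbers) of checking a black box’s decisions against the four-fifths disparate impact rule mentioned earlier. No access to the model’s internals is needed, only its decisions:

```python
# Sketch: auditing a black-box model purely from its outputs, using
# the "four-fifths" disparate impact rule. All data here is invented.

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ok(decisions, threshold=0.8):
    """Flag the model if any group's approval rate falls below 80% of
    the most-favored group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Hypothetical loan decisions pulled from an opaque model:
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)

print(selection_rates(decisions))      # group B approved at 0.5 vs 0.8
print(disparate_impact_ok(decisions))  # False -> grounds for a regulator to act
```

The point being: the audit works on outputs alone, which is exactly why proprietary black boxes aren’t beyond oversight.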
Yes, algorithms have already gotten away from us. By now, we have all heard about the mess that Facebook (and other social media platforms) have gotten themselves into. And there are other real-world examples of misused algorithms creating real problems for people.
I don’t deny it’s something we need to watch for since it’s ALREADY happening. However, I still don’t really consider it to be a big problem, simply because it seems fairly easy to fix.
Have the potato factory start bringing in higher quality potatoes, and have government regulatory organizations keep tabs on the outbound product to ensure it meets food safety standards.
2. Transparency of Algorithms
Even more worrying than the fact that companies won’t allow their algorithms to be publicly scrutinized, is the fact that some algorithms are obscure even to their creators.
Deep learning is a rapidly growing technique in machine learning that makes very good predictions, but is not really able to explain why it made any particular prediction.
For example, some algorithms have been used to fire teachers, without being able to give them an explanation of why the model indicated they should be fired.
How can we balance the need for more accurate algorithms with the need for transparency towards people who are being affected by these algorithms? If necessary, are we willing to sacrifice accuracy for transparency, as Europe’s new General Data Protection Regulation may do? If it’s true that humans are likely unaware of their true motives for acting, should we demand machines be better at this than we actually are?
First off, deep learning. Normally I stay away from wiki articles, but in this case, I needed a way to get my bearings (if you will). Once you learn enough about a topic or concept, you can further search for relevant information. I generally do that by starting with individual aspects of a given topic, and then building up from there.
It was a method that served me well for most of my explorations into glyphosate, GMOs and other research-heavy scientific topics. But not so much with this. I have a feeling I am starting at chapter 4 of a complicated textbook.
So let’s try and back it up a bit.
Machine learning is all about using statistical techniques to help computers learn without explicit programming. In a nutshell, it’s about making predictions or decisions based on raw data inputs. It’s actually already in common use by many of us, in the form of email filtering and network intrusion prevention (your firewall), among other things. Since the threats and environment in both areas are always changing, the algorithm learns common behaviors and makes future judgments accordingly.
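For illustration, here is a toy Python sketch of that email-filtering idea. The “training data” is invented and real spam filters are far more sophisticated, but the learn-from-examples principle is the same: the program is never told what spam looks like, it just counts:

```python
# Toy sketch of the email-filtering idea: the "algorithm" counts how
# often each word appears in spam vs. ham examples, then scores new
# messages. Training examples are invented for illustration.
from collections import Counter

spam_examples = ["win free money now", "free prize claim now"]
ham_examples = ["meeting notes attached", "lunch tomorrow?"]

spam_counts = Counter(w for msg in spam_examples for w in msg.split())
ham_counts = Counter(w for msg in ham_examples for w in msg.split())

def spam_score(message):
    """Positive score -> looks more like the spam it has seen."""
    return sum(spam_counts[w] - ham_counts[w] for w in message.lower().split())

print(spam_score("claim your free money"))  # 4: resembles spam
print(spam_score("meeting notes"))          # -2: resembles ham
```

Feed it fresh examples as threats change, and its judgments shift with them, which is the “always learning” part.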
There is much more than that. But that seems a good start.
In theory, the goal of the whole field seems to be generalization from experience. Which explains its close relationship to statistical analysis, given that it’s possible to make fairly accurate predictions in many areas based on statistics. I would imagine that the long-term goal is to outsource this task to an algorithm that is both faster and more accurate than the average human brain. Not to mention cheaper in the long run.
Given this, and the structural similarity of many AI networks to the neural networks of the human brain, I would think that deep learning takes the initial task to a whole new scale. Rather than drawing on a handful of data inputs, such an algorithm may be dealing with hundreds (if not more), all building on one another.
Most decisions made by humans are based on a multitude of different factors that we likely don’t even realize have an influence, so I imagine that the conclusions of these deep learning AI algorithms are similar.
Which seems to be where the problem lies in both the conclusions of the AI networks and their human counterparts. What influenced the outcome?
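To make the opacity concrete, here is a minimal Python sketch of a two-layer network with arbitrary, made-up weights. Every number is meaningful to the math, but none of the intermediate values has an obvious human explanation:

```python
# Minimal sketch of why a deep network's outcome is hard to explain:
# inputs pass through layers of weighted sums, and the intermediate
# numbers have no obvious human meaning. Weights are arbitrary.
import math

def layer(inputs, weights):
    """One layer: weighted sums squashed through a sigmoid."""
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

inputs = [0.9, 0.1, 0.4]                   # e.g. three input features
hidden = layer(inputs, [[0.5, -1.2, 0.3],  # hidden values: meaningful to
                        [-0.7, 0.8, 1.1]]) # the math, opaque to us
output = layer(hidden, [[1.5, -0.6]])

print(hidden)  # intermediate activations: what do these "mean"?
print(output)  # the final decision, with no built-in explanation
```

Now scale this from two hidden values to millions, and the “why did it decide that?” problem becomes obvious.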
In humans, finding and diagnosing this problem is a huge issue in itself. Even the seemingly simple task of convincing people that not all bias (racism, sexism, or otherwise) is necessarily overt is a challenge. As I can attest, having been one of those people.
This can also be backed up by the results of different tests that remove (or cloud) the identity of job applicants.
Blind Auditions Increased Women’s Participation In Many Orchestras
According to analysis using roster data, the transition to blind auditions from 1970 to the 1990s can explain 30 percent of the increase in the proportion female among new hires and possibly 25 percent of the increase in the percentage female in the orchestras.
Minorities Who “Whiten” Job Resumes Get More Interviews
In one study, the researchers created resumes for black and Asian applicants and sent them out for 1,600 entry-level jobs posted on job search websites in 16 metropolitan sections of the United States. Some of the resumes included information that clearly pointed out the applicants’ minority status, while others were whitened, or scrubbed of racial clues. The researchers then created email accounts and phone numbers for the applicants and observed how many were invited for interviews.
Employer callbacks for resumes that were whitened fared much better in the application pile than those that included ethnic information, even though the qualifications listed were identical. Twenty-five percent of black candidates received callbacks from their whitened resumes, while only 10 percent got calls when they left ethnic details intact. Among Asians, 21 percent got calls if they used whitened resumes, whereas only 11.5 percent heard back if they sent resumes with racial references.
‘Pro-diversity’ employers discriminate, too
In one study to test whether minorities whiten less often when they apply for jobs with employers that seem diversity-friendly, the researchers asked some participants to craft resumes for jobs that included pro-diversity statements and others to write resumes for jobs that didn’t mention diversity.
They found minorities were half as likely to whiten their resumes when applying for jobs with employers who said they care about diversity. One black student explained in an interview that with each resume she sent out, she weighed whether to include her involvement in a black student organization: “If the employer is known for like trying to employ more people of color and having like a diversity outreach program, then I would include it because in that sense they’re trying to broaden their employees, but if they’re not actively trying to reach out to other people of other races, then no, I wouldn’t include it.”
But these applicants who let their guard down about their race ended up inadvertently hurting their chances of being considered: Employers claiming to be pro-diversity discriminated against resumes with racial references just as much as employers who didn’t mention diversity at all in their job ads.
Given our tendency towards bias and our frequent inability to even realize it when making many decisions (including, at times, life-altering ones for others), it is indeed a bit scary to consider that a machine could be coming to similar conclusions under similar circumstances (that is, with the same dataset).
And yet, this is nothing new. This fear I would assume is based on the assumption that the human brain will come to a better (less biased) result. Which as we can see in the examples above (as well as many others), isn’t true.
Humans are far from immune to the contaminated results that are born of biased inputs. In fact, I would consider the human aspect of this problem far worse because few are watching for signs of this problem. AI and its conclusions will be under constant scrutiny due to the mistrust in it within contemporary society. But the same can’t be said for humans, considering that many don’t even realize (or refuse to acknowledge) that a problem exists in the first place!
When it comes to what drives human decisions, we are indeed in the dark. There might come a day when we will understand the brain well enough to map this out, but I doubt we’re anywhere near there. Which is a good thing. Because seeing into a mind is a scary thing to ponder. It’s pretty much the last private refuge that any of us have!
Unlike the human mind, I personally don’t see transparency in algorithms as being as difficult an issue to overcome as it’s being made out to be. If it’s something under our control, then it seems logical that one could add a caveat in the coding that enables a roadmap of sorts to what influenced a given decision. Or if that is not possible, there is always the nuclear option that is banning the use of so-called black box AI in high stakes situations (such as those involving employment, or insurance claims).
I still think that a lot can be attained from clean data inputs.
This strikes me as a good argument as to why much of this research should be socialized (or at least more transparent). Neutral researchers are more likely to start with clean data, AND to not overlook problems with the output should it be favorable to the organization they work for.
3. Supremacy of Algorithms
A similar but slightly different concern emerges from the previous two issues. If we start trusting algorithms to make decisions, who will have the final word on important decisions? Will it be humans, or algorithms?
For example, some algorithms are already being used to determine prison sentences. Given that we know judges’ decisions are affected by their moods, some people may argue that judges should be replaced with “robojudges”. However, a ProPublica study found that one of these popular sentencing algorithms was highly biased against blacks. To find a “risk score”, the algorithm uses inputs about a defendant’s acquaintances that would never be accepted as traditional evidence.
Should people be able to appeal because their judge was not human? If both human judges and sentencing algorithms are biased, which should we use? What should be the role of future “robojudges” on the Supreme Court?
Why is it that all of these seem to boil down to the raw data that the algorithm is working with?
To find a “risk score”, the algorithm uses inputs about a defendant’s acquaintances that would never be accepted as traditional evidence.
Then it obviously shouldn’t be used by the algorithm in determining a sentence. I would have thought that to be obvious. Which makes statements like “If both human judges and sentencing algorithms are biased, which should we use?” ridiculous.
This seems to be a perfect example of a naysayer throwing the baby out with the bathwater. The assumption that the algorithm is inherently biased, and therefore garbage.
Though I would certainly say that for humans (even myself, really), I strongly suspect that biased inputs are strongly influencing the biased outputs being highlighted. The solution, of course, is ensuring a clean data feed. Which includes stripping external data that normal court proceedings would not consider in the first place.
Should people be allowed to appeal a sentence because the entity issuing it was not a human? Of course.
Yes, this could become a common reason for appeal. But if the case is built on a strong enough foundation, one shouldn’t have anything to worry about.
Seeing how easily people can be manipulated by various emotional inputs makes me personally PREFER the thought of a trial overseen by a robot judge (assuming the bias in data issues are dealt with). Democracy is overrated.
So if I am given the choice between a jury of randomly selected participants sourced from a contaminated pool or a cold and unemotional machine that deals in the extraction of raw data inputs, I’ll take the machine.
4. Fake News and Fake Videos
Another ethical concern comes up around the topic of (mis)information. Machine learning is used to determine what content to show to different audiences. Given how advertising models are the basis for most social media platforms, screen-time is used as the typical measure of success. Given that humans are more likely to engage with more inflammatory content, biased stories spread virally. Relatedly, we are on the verge of using ML tools to create viral fake videos that are so realistic humans couldn’t tell them apart.
For example, a recent study showed that fake news spreads faster than real news. False news was 70% more likely to be retweeted than real news. Given this, many are trying to influence elections and political opinions using fake news. A recent undercover investigation into Cambridge Analytica caught them on tape bragging about using fake news to influence elections.
If we know that videos can be faked, what will be acceptable as evidence in a courtroom? How can we slow the spread of false information, and who will get to decide which news counts as “true”?
First off, this has been a concern of mine for years now. Starting with friends of mine that I have coffee with fairly regularly.
I began to notice how, on a steady diet of police brutality videos and commentary on the topic, they developed a very anti-cop attitude. All based on videos that may not even have been recent, though they rarely checked.
The same thing applied when it came to various conspiracy theories. If I expressed doubt, up comes a video of some moron disproving chemtrails because when he walks in the winter, he doesn’t leave a 20 block long contrail. Yes, some of these people ARE that easy to fool.
These days, it’s all about Trump’s heroic task of bringing Hillary and Obama to JUSTICE for their horrible crimes. What did they do? Fuck if I know.
Oh yeah . . . SHE SOLD URANIUM TO THE RUSSIANS! Pizzagate!
For years, I have seen this story play out in people I know. The people I describe have a history of gullibility. But what I was most concerned about, was how these algorithms may affect those with a mental illness.
Then it all fell by the wayside for some time and I forgot about it. That is, until 2016 happened, and suddenly the world realized how damaging these unregulated social media algorithms can be. Putting people into smaller and smaller niches may be good for screen time (and therefore money, at least in theory), but it’s not good for a democracy. Add in many actors that have learned how to use these schisms to their advantage, and you have a real hornet’s nest. Culminating in Brexit and Donald Trump.
In many respects, the cat is out of the bag now. I may know many of the nuances that make up the fake news category, but for many people, such words fall on deaf ears. When the president of the United States is openly using the term Fake News to tarnish any organization that doesn’t meet his approval, the appeal-to-authority fallacy becomes all the more potent.
Though social media has indeed done a lot of damage in the short time that it has had any influence on civilization, it may not be as irreversible as it feels. It’s not like the schisms erupted from a vacuum. Again, it was a case of a machine running rampant based on a tainted data pool.
I recently deactivated my Facebook account on account of some of this. I was fed up with being bombarded with an endless torrent of stupidity. Because sharing is as easy as a single click or tap.
Reining this in becomes a touchy subject, as far as free speech is concerned. One COULD program algorithms to bury the fake stuff, or at least prioritize the legitimate over the trash. But then you leave yourself open to the many ways that could be manipulated. Imagine trying to break the Snowden story with algorithms programmed against you.
I propose the solution of tags.
Tags that make the origin, bias, and legitimacy of an article, post or meme clear. While these tags may be disregarded by a segment of social media users (the ones one likely has no hope of EVER reaching to begin with), they are not the target. The tags are more useful for casual social media browsers who read and post without a second thought. I suspect that if there were a stimulus present that casts doubt on the legitimacy of the information, many would be more careful with what they post.
I envision it as even becoming part of the social hierarchy. At this point, few are held to account for the ramifications of their thoughtless posting. However, an environment where problems with content are outlined before AND after posting may make for a different result.
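A rough Python sketch of the tagging idea, with an entirely hypothetical source registry (the domains and ratings here are invented for illustration):

```python
# Sketch of the tagging idea: annotate a post with origin/bias/
# legitimacy tags from a lookup table before it is displayed.
# The registry and its ratings are entirely hypothetical.

SOURCE_REGISTRY = {
    "examplenews.com": {"origin": "US", "bias": "left-leaning",
                        "legitimacy": "high"},
    "totallyrealfacts.net": {"origin": "unknown", "bias": "unknown",
                             "legitimacy": "low"},
}

UNKNOWN = {"origin": "unknown", "bias": "unknown", "legitimacy": "unverified"}

def tag_post(post):
    """Attach the source's tags (or 'unverified') to a post dict."""
    return {**post, "tags": SOURCE_REGISTRY.get(post["source"], UNKNOWN)}

post = tag_post({"text": "Shocking revelation!",
                 "source": "totallyrealfacts.net"})
print(post["tags"]["legitimacy"])  # "low" -> shown before the share button
```

The hard part, of course, is not the lookup but who maintains the registry and how its ratings are decided, which loops right back to the “who decides what counts as true” question.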
As for how to distinguish fake videos from real ones, it is currently not all that difficult to tell the difference (though the fakes are getting better). I like this advice from a real-life photo forensics expert.
Farid calls this an “information war” and an “arms race”. As video-faking technology becomes both better and more accessible, photo forensics experts like him have to work harder. This is why he doesn’t advise you go around trying to determine the amount of green or red in someone’s face when you think you spot a fake video online.
“You can’t do the forensic analysis. We shouldn’t ask people to do that, that’s absurd,” he says. He warns that the consequences of this could be people claiming real videos of politicians behaving badly are actually faked. His advice is simple: think more.
“Before you hit the like, share, retweet etcetera, just think for a minute. Where did this come from? Do I know what the source is? Does this actually make sense?
“Forget about the forensics, just think.”
That is my rule of thumb in this realm. Never take anything for granted, never assume. Because even a seemingly innocuous meme post about a sunken German U-boat in the great lakes may actually be a Russian submarine somewhere else entirely.
I mention this because I came VERY close to actually posting it myself around a year ago.
5. Lethal Autonomous Weapon Systems
AI researchers say we will be able to create lethal autonomous weapons systems in less than a decade. These could take the form of small, deployable drones that, unlike current military drones, are able to make decisions about killing without human approval.
For example, a recent video created by AI researchers showcases how small autonomous drones, Slaughterbots, could be used for killing targeted groups of people, i.e., genocide. Almost 4,000 AI/Robotics researchers have signed an open letter asking for a ban on offensive autonomous weapons.
On what basis should we ban these types of weapons, when individual countries would like to take advantage of them? If we do ban these, how can we ensure that it doesn’t drive research underground and lead to individuals creating these on their own?
This one also seems obvious. A bizarre combination of both war crime and potential suicide. Not to mention that it doesn’t help with the whole scare factor that is associated with AI.
Personally, I am for bans and treaties restricting the usage of this kind of technology. Like nuclear weapons, the deployment of these weapons could be EXTREMELY consequential, and possibly even a point of no return.
Having said that, such a ban may be hard to enforce on a global level. World War 2 showed us exactly how little treaties matter when the rubber meets the road. Also worth mentioning is how erratic superpowers contribute to the demand for research of such weapons, to begin with.
I would guess that no one is going to stop China or Russia on this path, should they choose to follow it. As for smaller rogue nations, invasions in recent years of supposedly denuclearized nations have trashed whatever credibility the west had in the conversation, leaving such states no option but mutually assured destruction.
Alright, maybe Iran or North Korea don’t have the capability to completely decimate a behemoth like the United States. Nonetheless, you don’t have to if you can make matchsticks of a major city or three. Not to mention the blowback as treaties activate and other nations get involved in the war effort.
And now John Bolton is the National Security Advisor for President Trump. The standard to which all hawks are compared is effectively at the helm.
Indeed, this may be a modern-day Wernher von Braun moment in history. That is, a time when it would be great to have a rational observer beyond all patriotic influence step in and say “WHAT THE HELL ARE YOU DOING?!”.
It’s certainly what came to my mind after I watched an older documentary about the evolution of weaponized rocketry in the United States. One that ended by praising the fact that we are now able to essentially hit the reset button on the biological clock of the planet.
6. Self-driving Cars
Google, Uber, Tesla and many others are joining this rapidly growing field, but many ethical questions remain unanswered.
For example, an Uber self-driving vehicle recently killed a pedestrian in March 2018. Even though there was a “safety driver” for emergencies, they weren’t fast enough to stop the car in time.
In the article, there is a screen capture from the vehicle just before impact.
The Guardian also has the released footage on its website, both from the front of the vehicle and the interior. You can see how suddenly the situation came up by the horrified look on the driver’s face. This is going to be with her for the rest of her life.
Time for the critiques. Starting with “New footage of the crash that killed Elaine Herzberg raises fresh questions about why the self-driving car did not stop” as written by The Guardian.
Even without going further into the article, we can use the video itself for clues. First off, and most obviously, this happened at night. Second is the darkness of the scene around the roadway. I see multiple lanes and an exit just ahead, which tells me that this was a high-speed roadway. In any case, the pedestrian appeared out of nowhere.
Questions are being raised about the equipment on the vehicle not responding to the threat, given its supposed ability to spot such threats hundreds of yards in advance. It is definitely worth looking into.
That said, however, I don’t think that the presence of the autonomous vehicle should be seen as anything more than coincidence. I suspect that the driver may have been texting, which is a problem; putting too much faith in automated systems has doomed fly-by-wire aircraft before. Nonetheless, even without a potential failure of the technology, I question whether the collision would have been avoidable anyway.
Would I be seeing this case in the international media if it were just an ordinary car? Even if there was negligence on the part of the driver?
I highly doubt it. To the local population, it would be an unfortunate situation and traffic fatality. To the rest of us, it would be a statistic. Also, another anecdote as to why smart cars driving on smart roadways should be the way of the future.
No matter what the family or anyone else says, that pedestrian should NOT have been there. And most importantly of all, even if the driver may not have had time to see and avoid (to borrow an aviation term), the pedestrian had PLENTY of time. I counted no fewer than three lanes in the direction she was walking from, which means there should have been plenty of time to see the oncoming headlights. The fact that she seemingly didn’t notice them tells me that there was a distraction on her part as well. If not texting, then possibly headphones.
There is a reason why I don’t listen to headphones when walking, nor do I text or talk on the phone when crossing a street. It only takes one second. And even if you are technically in the right, it’s hardly consolation if you’re under a vehicle, or beside it with broken ribs (if not worse).
Instead of turning this accident into an ethical dilemma, use it for the far more productive purpose of educating people: PAY ATTENTION TO YOUR SURROUNDINGS. Your life may depend on it someday.
As self-driving cars are deployed more widely, who should be liable when accidents happen? Should it be the company that made the car, the engineer who made a mistake in the code, the operator who should’ve been watching?
What about the pedestrian (or another motorist) who should have seen the threat coming? Should Boeing and Airbus be liable when operators of their heavily automated products grow too complacent with the automation?
Should we not just take this on a case by case basis?
If a self-driving car is going too fast and has to choose between crashing into people or falling off a cliff, what should the car do? (this is a literal trolley problem)
The first question that comes to mind is why a self-driving car would be going too fast to begin with. While a malfunction scenario is possible, the most likely culprit will almost certainly boil down to human intervention. I say this because humans are known to defeat safety mechanisms for any number of frivolous reasons. It may as well be in our DNA.
I am speaking in the theoretical, at least at this point in history. But assuming the technology continues on its current trajectory, I can see systems coming online that require less and less human input. Particularly if/when the equipment is also in communication with sensors and inputs from the roadway itself. I can envision this as being both far safer and far more efficient than any current roadway. I explored this topic in some depth in a previous post.
In my conclusions, my projections were based on the trajectory of aircraft automation. The more automated aviation has become, the safer the industry has been. I see no reason why the trajectory of autonomous vehicles should be any different.
Once self-driving cars are safer than the average human drivers (in the same proportion that average human drivers are safer than drunk drivers) should we make human-driving illegal?
I also explored this in my previous piece (linked above), but I will go into it a bit here.
This indeed will be a topic of concern moving forward. It’s said that the next 30 or 40 years may be a tricky transition period, as automated machines mix with human-driven machines on public roadways. Though machines are mostly predictable, humans oftentimes are not, for a whole host of reasons. As a result, I have no doubt that there will be future collisions where these two factors come into conflict. However, I also don’t doubt that the permeation of automation on roadways will eventually reduce this statistic over time.
It will mean that all levels of government will have to make some tough decisions. When we reach the point where human operators now pose an exceptional risk on primarily automated roadways, do we ban their presence altogether? Do we assign separated roadways to accommodate both types of traffic?
It will be interesting to see where this goes.
7. Privacy vs Surveillance
The ubiquitous presence of security cameras and facial recognition algorithms will create new ethical issues around surveillance. Very soon cameras will be able to find and track people on the streets. Before facial recognition, even omnipresent cameras allowed for privacy because it would be impossible to have humans watching all the footage all the time. With facial recognition, algorithms can look at large amounts of footage much faster.
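The core mechanism behind this kind of tracking is simpler than it sounds: a model reduces each detected face to a numeric embedding (a vector), and “recognizing” someone is just a distance comparison between vectors. A minimal sketch in Python, where the tiny hand-made vectors and the 0.6 threshold are illustrative placeholders rather than output from a real model:

```python
import math

# In a real system, a deep neural network would convert each detected
# face into a fixed-length embedding vector. The tiny vectors below are
# made-up placeholders purely for illustration.

def euclidean(a, b):
    """Distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(known, candidate, threshold=0.6):
    """Two faces count as 'the same person' if their embeddings are close."""
    return euclidean(known, candidate) < threshold

# Hypothetical embeddings: one watchlist face, two faces seen on camera.
watchlist_face = [0.1, 0.9, 0.3]
camera_faces = [
    [0.12, 0.88, 0.31],  # very close to the watchlist face
    [0.70, 0.20, 0.90],  # clearly someone else
]

matches = [is_match(watchlist_face, f) for f in camera_faces]
print(matches)  # -> [True, False]
```

The point is the scale: once matching is a cheap numeric comparison, a single machine can sweep through footage that no team of human observers could ever review.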
For example, CCTV cameras are already starting to be used in China to monitor the location of citizens. Some police have even received facial-recognition glasses that can give them information in real time from someone they see on the street.
Should there be regulation against the usage of these technologies? Given that social change often begins as challenges to the status quo and civil disobedience, can a panopticon lead to a loss of liberty and social change?
The first thing I will say about the Panopticon comparison (after looking it up) is that we are already there. Not exactly in the literal sense (as in cameras equipped with facial recognition tracking everyone everywhere). Rather, in the data and metadata generated by our everyday interaction with technology.
High-level intelligence agencies likely have access to much of the information within the deep web (think contents of email accounts, cloud servers, etc. Even the contents of your home computer would count, since it can access the internet but its contents are not indexed by search engines).
Note the difference between the deep and dark webs. The deep web is all about stored content. The dark web is where you find assassins (and is itself considered a small part of the deep web).
Along with what is stored on various servers of differing purposes, it’s now well known that various intelligence agencies (most notably the NSA) are also copying and storing literally petabytes of data as it flows through the many fiber-optic backbones that make up global online infrastructure. Even if your own nation’s agencies aren’t allowed to access this data by law, chances are good that nothing is stopping other prying eyes from looking in.
In these days of encrypted communication as the new standard, however, I am unsure of how useful these dragnets will really be. What is more important is how the platforms one interacts with handle government requests for data.
Either way, many aspects of one’s existence can be mapped out by way of their digital breadcrumbs. Texts, emails, financial transactions etc. While financials and other unnoticed tracking systems (rewards programs!) can help track you physically, cellular technology brings a whole lot more accuracy in this regard. Particularly in the age of the wifi network.
One day, I brought up Google Maps on my PC (I don’t remember why). I was surprised to see that it had my location pinpointed not only to the city but right down to the address. This alarmed me a little, knowing that this laptop has neither GPS nor cellular network access capabilities. The best Google Maps should be able to do is my city, based on the location of my ISP (as dictated by my IP address).
I learned later, however, that when an Android (Google), Apple, or Microsoft mobile device connects to a new wifi network, the device makes note of the network’s geographical location and sends the information to databases maintained by each company. So in theory, my network is now indexed in all three databases.
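The essence of this kind of crowdsourced wifi positioning can be sketched in a few lines: GPS-equipped phones contribute (router identifier, coordinates) pairs to a database, and any later device that can see the same routers gets a street-level fix without GPS of its own. The sketch below is a toy model of that idea; all the router IDs and coordinates are made-up placeholders, not real data or a real API:

```python
# Toy model of crowdsourced wifi positioning: phones with GPS report
# (router BSSID -> coordinates), and a GPS-less device that later sees
# the same routers is located by averaging their known positions.
# All BSSIDs and coordinates are fabricated placeholders.

wifi_location_db = {}  # BSSID -> (latitude, longitude)

def report_sighting(bssid, lat, lon):
    """A GPS-equipped phone contributes a router's observed location."""
    wifi_location_db[bssid] = (lat, lon)

def locate(visible_bssids):
    """Estimate a device's position from the routers it can currently see."""
    known = [wifi_location_db[b] for b in visible_bssids if b in wifi_location_db]
    if not known:
        return None  # no mapped routers in range
    lat = sum(p[0] for p in known) / len(known)
    lon = sum(p[1] for p in known) / len(known)
    return (lat, lon)

# A passing phone maps two home routers...
report_sighting("aa:bb:cc:00:00:01", 49.90, -97.14)
report_sighting("aa:bb:cc:00:00:02", 49.92, -97.12)

# ...and later a laptop with no GPS sees both and gets a street-level fix.
pos = locate(["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"])
print(round(pos[0], 2), round(pos[1], 2))  # -> 49.91 -97.13
```

Real services refine this with signal strength and many overlapping reports, but the underlying trade is the same: one stranger’s GPS reading is enough to put your router, and hence your address, in a database forever.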
Given this, having cameras everywhere utilizing surveillance technology will not really add much to the situation. In many ways, most of us are already almost transparent if the desire for the information is there. They just have to put the pieces together.
Granted, I don’t live in a large metropolis (or a country like England) with a saturation of public and private surveillance. Nonetheless, it’s still hard to imagine facial recognition tracking as being any worse than what already happens.
Indeed, it is a bit unnerving to know that most of your past steps could be easily tracked with such a system. But at the same time, similar results can be reached just by requesting logs of what towers your cell phone utilized within whatever time period is needed. Or better yet, the GPS data very likely stored by Google, Apple, or Microsoft.
Not having facial recognition built into the saturated CCTV systems of the world would indeed be the ideal option. The same could be said for algorithms designed to piece together usage patterns from the stockpiled intelligence-dragnet data. Nonetheless, at least one of those bridges has already been crossed (likely both). So the best we can do is try to keep a tight grasp on who can use the data.
Just as a court order is required for law enforcement to access your phone or internet records, so too should be the case for facial recognition data.
Philosophy with a deadline
Actual people are currently suffering from these technologies: being unfairly tracked, fired, jailed, and even killed by biased and inscrutable algorithms.
We need to find appropriate legislation for AI in these fields. However, we can’t legislate until society forms an opinion. We can’t have an opinion until we start having these ethical conversations and debates. Let’s do it. And let’s get into the habit of beginning to think about ethical implications at the same time we conceive of a new technology.
While I agree with the “let’s get into the habit of thinking about ethical implications at the same time we conceive of a new technology” part, I don’t think that society should have to form an opinion first.
Society has already formed opinions about both AI and social media. The view of AI tends to veer negative, while the view of social media (at least prior to late 2017 / early 2018) tended to be more positive. Societal opinions tend to come less from hard evidence and more from external drivers like pop culture and marketing. Since both of those can be used to bias the court of public opinion in either direction, the public stance is hardly helpful in this case.
When you don’t have a majority of media literate citizens, democracy is overrated.
So yes, keep checking the pushers of technological breakthroughs so as to ensure they are not getting ahead of all the details in their excitement (or haste) to promote such new innovations. But don’t make it a public spectacle.