“Unboxing Google’s 7 New Principles Of Artificial Intelligence” – (aitrends)

Today, I am going to look into Google's recent release of its 7 new principles of artificial intelligence. Though the release was made at the beginning of July, life happens, so I haven't been able to get around to it until now.

https://aitrends.com/ethics-and-social-issues/unboxing-googles-7-new-principles-of-artificial-intelligence/

How many times have you heard that Artificial Intelligence (AI) is humanity’s biggest threat? Some people think that Google brought us a step closer to a dark future when Duplex was announced last month, a new capability of Google’s digital Assistant that enables it to make phone calls on your behalf to book appointments with small businesses.

The root of the controversy lay in the fact that the Assistant successfully pretended to be a real human, never disclosing its true identity to the other side of the call. Many tech experts wondered if this is an ethical practice or if it's necessary to hide the digital nature of the voice.

Right off the bat, we're into some interesting stuff. An assistant that can appear to do all of your phone-call-related chores FOR you.

On one hand, I can understand the ethical implications. Without confirming the nature of the caller, it could very well be seen as a form of fraud. It's seen as such already when a person contacts a service provider on behalf of another person without making that part clear (even if they authorize the action!). Granted, most of the time, no one on the other end will likely even notice. But you never know.
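To make the concern concrete, here is a minimal sketch (in Python, with entirely hypothetical names, and certainly not Google's actual Duplex code) of how an assistant-initiated call could open with an explicit disclosure, and how trivially that disclosure can be toggled off:

```python
# Hypothetical sketch: an assistant-initiated call that discloses its digital
# nature up front. Names and structure are illustrative only, not any real
# assistant's implementation.
from dataclasses import dataclass


@dataclass
class CallConfig:
    user_name: str           # person the assistant is calling on behalf of
    disclose_identity: bool  # whether to announce that the call is automated


def opening_line(config: CallConfig) -> str:
    """Build the first utterance of the call."""
    if config.disclose_identity:
        return (f"Hi, I'm an automated assistant calling on behalf of "
                f"{config.user_name} to book an appointment.")
    # Without disclosure, the call is indistinguishable from a human caller,
    # which is exactly the ethical concern raised above.
    return f"Hi, I'm calling to book an appointment for {config.user_name}."


print(opening_line(CallConfig(user_name="Alex", disclose_identity=True)))
```

The interesting part is that the ethics live in a single boolean: the technology itself is neutral, and the disclosure is a policy choice.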

When it comes to disguising the digital nature of the voice of such an assistant, I don't see any issue with this. While it could be seen as deceptive, I can also see many businesses hanging up on callers that come across as being too robotic. Consider the first pizza ever ordered by a robot.

Okay, not quite. We are leaps and bounds ahead of that voice in terms of, well, sounding human. Nonetheless, there is still an unmistakably automated feel to such digital assistants as Siri, Alexa, and Cortana.

In this case, I don't think that Google (or any other future developer or distributor of such technology) has to worry about any ethical issues surrounding this, simply because the onus is on the user to ensure the proper use of the product or service (to paraphrase every TOS agreement ever).

One big problem I see coming with the advent of this technology is that the art of deception of the worst kind is going to get a whole lot easier. One example that comes to mind is those OBVIOUSLY computer-narrated voices belching out all manner of fake news to the YouTube community. For now, the fakes are fairly easy for the wise to pick up on because the voices haven't quite learned the nuances of the English language (then again, have I?). In the future, this is likely to change drastically.
Another example of a problem posed by this technology would be telephone scamming. Phishing scams originating in the third world are currently often hindered by the language barrier. It takes a lot of study to master enough English to fool most people in English-speaking nations. Enter this technology, and that barrier is gone.

And on the flip side of the coin, anything that is intelligent enough to make a call on your behalf can presumably also be programmed in reverse: to take calls. That would effectively eliminate the need for a good 95% of the call center industry. Though some issues may need to be dealt with by a human, most common sales, billing, or tech support problems can likely be handled autonomously.

So ends that career goal.

Nonetheless, I could see myself having a use for such technology. I hate talking on the phone with strangers, even for a short time. To have the need for that eliminated would be VERY convenient. What can be fetched with a tap and a click already is, so eliminating what's left . . . I'm in millennial heaven.

You heard it here first . . .

Millennials killed THE ECONOMY!

Google was also criticized last month over another sensitive topic: the company's involvement in a Pentagon program that uses AI to interpret video imagery and could be used to improve the targeting of drone strikes. Thousands of employees signed a letter protesting the program and asking for change:

“We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

Time to ruffle some progressive feathers.

In interpreting this, I am very curious about what is meant by the word "improve". What does it mean to improve the targeting of drone strikes? Improve the aiming accuracy of the weaponry? Or improve the quality of the targets (more actual terrorist hideouts, and fewer family homes)?

This has all become very confusing to me. One could even say that I am speaking out of both sides of my mouth.
On one hand, when I think of this topic, my head starts spitting out the usual deliberately dehumanizing war language: terrorists, combatants, the enemy. Yet here I am, pondering whether improved drone strikes are a good thing.

I suppose that it largely depends on where your interests are aligned. If you are aligned more nationalistically than humanistically, then this question is legitimate. If you work for or are a shareholder of a defense contractor, then this question is legitimate. Interestingly, this could include me, being a paying member of both private and public pension plans (pension funds are generally invested in the market).

Even the use of drones alone COULD be seen as cowardly. On the other hand, that would entail that letting the troops loose onto the battlefield, as in the great wars of the past, would be the less cowardly approach, as though it were somehow less cowardly for the death ratio to be more equal.
Such an equation would strike most people as completely asinine. The obvious answer is the method with the least bloodshed (at least for our team). Therefore, it's "BOMBS AWAY!" from a control room somewhere in the desert.

For most, it likely boils down to a matter of whether we HAVE to. If we HAVE to go to war, then this is the best way possible. Which then leads to the obvious question: "Did we have to go to war?" Though the answers are rarely clear, they almost always end up leaning towards the No side. And generally, the public never finds this out until after the fact. Whoops!

The Google staff (as have other employees in Silicon Valley, no doubt) have made their stance perfectly clear: no warfare R & D, PERIOD. While the stance is enviable, I can't help but think it comes off as naive. I won't disagree that the humanistic position is not to enable the current or future endeavors of the military-industrial complex (of which they are now a part, unfortunately). But even if we take the humanist stance, many bad actors the world over have no such reservations.
Though the public is worried about a menace crossing the border disguised as a refugee, the REAL menace sits in a computer lab. Without leaving the comfort of a chair, they can cause more chaos and damage than one could even dream of.

The next war is going to be waged in cyberspace. And at the moment, a HUGE majority of the infrastructure we rely upon for life itself is in some stage of insecurity ranging from wide open to “Password:123456”.
If there is anyone who is in a good position to prepare for this new terrain of action, it’s the tech industry.

On one hand, as someone who leans in the direction of humanism, I see war as nonsense and the epitome of a lack of logic. But on the other hand, if there is one thing that our species has perfected, it's the art of taking each other out.

I suspect this will be our undoing. If it’s AI gone bad, I will be very surprised. I suspect it will be either mutually assured destruction gone real, or climate change gone wild. Which I suppose is its own form of mutually assured destruction.

I need a beer.

Part of this exploration was based on a segment of the September 28, 2018 episode of Real Time where Bill has a conversation about the close relationship between astrophysicists and the military (starts at 31:57). The man's anti-philosophical views annoyed me when I learned of them 3 years ago, and it seems that he has become a walking example of what you get when you put the philosophy textbooks out with the garbage.

A "clear policy" around AI is a bold ask because none of the big players have ever done it before, and for good reasons. It is such a new and powerful technology that it's still unclear how many areas of our lives we will dare to infuse with it, and it's difficult to set rules around the unknown. Google Duplex is a good example of this: a technological development that we would have considered "magical" 10 years ago, yet today it scares many people.

Regardless, Sundar Pichai not only complied with the request, but took it a step further by creating 7 principles that the company will promote and enforce as one of the industry drivers of AI.

When it comes to this sort of thing, I am not so much scared as I am nervous. Nervous of numerous entities (most of them private, for-profit, and therefore not obligated to share data) all working on this independently, and having to self-police. This was how the internet was allowed to develop, and that has not necessarily been a good thing. I need go no further than the 2016 election to showcase what can happen when a handful of entities have far too much influence on, say, setting the mood for an entire population. It's not exactly mind control as dictated by Alex Jones, but for the purpose of messing with the internal sovereignty of nations, the technology is perfectly suitable.

Yet another thing that annoys me about those who think they are red-pilled because they can see a conspiracy around every corner.

I always hear about mind control and the mainstream media, even though the traditional mainstream media has shrinking influence with each passing year. It's being replaced by preference-tailored social media platforms that don't just serve up what you love, but also often (and unknowingly) paint a false image of how the world looks. While facts and statistics say one thing, my YouTube suggestions and overall filter bubbles say another.
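To illustrate how mechanically that narrowing can happen, here is a toy sketch (fabricated data and topic labels, not any platform's real ranking code) of a feed that ranks items purely by past engagement:

```python
# Toy sketch of a preference-tailored feed: rank candidate items purely by how
# often the user engaged with that topic before. Purely illustrative.
from collections import Counter

past_clicks = ["politics", "politics", "politics", "cats", "politics"]
candidate_items = [
    ("politics", "Outrage piece #4812"),
    ("science", "New exoplanet survey results"),
    ("cats", "Cat learns to open fridge"),
]

topic_affinity = Counter(past_clicks)

# Sort candidates by the user's historical engagement with their topic.
feed = sorted(candidate_items,
              key=lambda item: topic_affinity[item[0]],
              reverse=True)

for topic, title in feed:
    print(f"{topic:>8}: {title}")
# The "science" item sinks to the bottom no matter how newsworthy it is:
# the feed reflects past clicks, not the state of the world.
```

Real recommenders are vastly more sophisticated, but the basic feedback loop, past behavior shaping future exposure, is the same.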

It's not psy-ops and it doesn't involve chemtrails, but it's just as scary, considering that most of the people developing this influential technology also don't fully grasp what they have developed.

1. Be socially beneficial

For years, we have dealt with comfortable boundaries, creating increasingly intelligent entities in very focused areas. AI is now getting the ability to switch between different domain areas in a transparent way for the user. For example, having an AI that knows your habits at home is very convenient, especially when your home appliances are connected to the same network. When that same AI also knows your habits outside home, like your favorite restaurants, your friends, your calendar, etc., its influence in your life can become scary. It’s precisely this convenience that is pushing us out of our comfort zone.

This principle is the most important one since it vows to "respect cultural, social, and legal norms". It's a broad principle, but it's intended to ease that uncomfortable feeling by adapting AI to our times and letting it evolve at the same pace as our social conventions do.

Truth be told, I am not sure I understand this one (at least the explanation). It seems like the argument is that the convenience of it all will help push people out of their comfort zone. But I am a bit perplexed as to what that entails.
Their comfort zone, as in their hesitation to allow an advanced algorithm to take such a prominent role in their lives? Or their comfort zone, as in helping to create opportunities for new interactions and experiences?

In the case of the former, it makes perfect sense. One need only look at the 10-deep line at the human-run checkout and the zero-deep line at the self-checkout to understand this hesitation.
As for the latter, most would be likely to notice a trend in the opposite direction. An introvert's dream could be seen as an extrovert's worst nightmare. Granted, many of the people making comments (at least in my life) about how technology isolates the kids tend to be annoyingly pushy extroverts who see that way of being as the norm. Which can be annoying, in general.

Either way, I suspect that this is another case of the onus being on the user to define their own destiny. Granted, that is not always easy if the designers of this technology don’t fully understand what they are introducing to the marketplace.

If this proves anything, it's that this technology HAS to have regulatory supervision from entities whose well-being (be it reputation- or currency-wise) is not tied to the success or failure of the project. Time and time again, we have seen that when allowed to self-police, private for-profit entities are willing to bury information that raises concerns about profitable enterprises. In a nutshell, libertarianism doesn't work.

In fact, with the way much of this new technology hijacks and otherwise finds ways to interact with us via our psychological flaws, it would be beneficial to mandate long-term real-world testing of these technologies, in the same way that new drugs must undergo trials before they can be released on the market.

Indeed, the industry will do all it can to fight this, because it will effectively bring the process of innovation to a standstill. But at the same time, most of the worst offenders for manipulating the psyche of their user base do it strictly because the attention economy is so cutthroat.
Thus, would this really be stifling technology? Or would it just be forcing the cheaters to stop placing their own self-interest above their users'?

2. Avoid creating or reinforcing unfair bias

AI can become racist if we allow it. A good example of this happened in March 2016, when Microsoft unveiled an AI with a Twitter interface and in less than a day people taught it the worst aspects of our humanity. AI learns by example, so ensuring that safeguards are in place to avoid this type of situation is critical. Our kids are going to grow up in a world increasingly assisted by AI, so we need to educate the system before it's exposed to internet trolls and other bad players.

The author illustrates a good point here, though I am unsure if they realize that they answered their own question with their explanation.
Machines are a blank slate. Not unlike children growing up to eventually become adults, they will be influenced by the data they are presented with. If they are exposed only to neutral data, they are less likely to come to biased conclusions.

So far, almost all of the stories I have come across about AI going racist, sexist, etc. can be pinpointed to the data stream it was trained on. Since we understand that the dominant ideologies of parents tend to be reflected in their children, this should be fairly obvious. And unlike biases in humans, which are difficult to reverse, an AI can presumably be shut down and reprogrammed. A mistake that can be corrected.
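A toy sketch of that point (with made-up data): a model that simply learns frequencies from its training set will faithfully reproduce whatever skew is in that set, and "reprogramming" it is as easy as retraining on better data.

```python
# Minimal illustration with fabricated toy data: a frequency-learning "model"
# reproduces the skew in its training set, and retraining removes it.
from collections import defaultdict


def train(examples):
    """Learn, per group, the rate at which the training data says 'approve'."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, label in examples:
        counts[group][0] += int(label)
        counts[group][1] += 1
    return {g: approvals / total for g, (approvals, total) in counts.items()}


# Skewed historical data: group B was approved far less often.
biased_data = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70
print(train(biased_data))    # {'A': 0.8, 'B': 0.3} -- the bias is learned

# Retrain on balanced data and the disparity disappears.
balanced_data = [("A", 1)] * 50 + [("A", 0)] * 50 + [("B", 1)] * 50 + [("B", 0)] * 50
print(train(balanced_data))  # {'A': 0.5, 'B': 0.5}
```

Real systems are far less transparent than this, but the principle stands: the bias lives in the data, and the data can be changed.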

Which highlights another interesting thing about this line of study. It forces one to seriously consider things like unconscious human bias. As opposed to the common anti-SJW faux-intellectual stance that is:

“Are you serious?! Sexism without being overtly sexist?! Liberal colleges are turning everyone into snowflakes!”

But then again, what is a filter bubble good for if not excluding nuance?

3. Be built and tested for safety

This point goes hand in hand with the previous one. In fact, Microsoft's response to the Tay fiasco was to take it down and admit an oversight on the type of scenarios that the AI was tested against. Safety should always be one of the first considerations when designing an AI.

This is good, but coming from a private for-profit entity, it really means nothing. One has to have faith (hello, Apistevists!) that Alphabet / Google won't bury any negative findings about the technology, particularly if it is found to be profitable. A responsibility that I would entrust to no human with billions of dollars of revenue at stake.

Safety should always be the first consideration when designing ANYTHING. But we know how this plays out when an industry is allowed free rein.
In some cases, airplane cargo doors fly off, or fuel tanks puncture and catch fire, and people die. In others, sovereign national elections get hijacked and culminate in a candidate whose legitimacy many question.
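As a deliberately simplistic illustration of what "built and tested for safety" could mean in a Tay-like scenario, here is a hedged sketch of screening user input before it ever reaches a learning loop. The blocklist and names are placeholders, not anyone's real moderation system.

```python
# Hedged sketch of one safety safeguard: screen user-submitted text before it
# enters the learning loop, rather than learning from raw internet input the
# way Tay did. The blocklist and scoring here are placeholder-level simple.
BLOCKED_TERMS = {"slur1", "slur2"}  # placeholder; a real filter is far richer


def safe_to_learn_from(message: str) -> bool:
    """Return True only if the message passes the (toy) content screen."""
    tokens = set(message.lower().split())
    return not (tokens & BLOCKED_TERMS)


incoming = ["hello there bot", "teach yourself this slur1 nonsense"]
training_queue = [m for m in incoming if safe_to_learn_from(m)]
print(training_queue)  # only the benign message survives the screen
```

The hard part, of course, is that real abuse is rarely this easy to spot, which is exactly why testing against hostile scenarios has to happen before launch rather than after.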

4. Be accountable to people

The biggest criticism Google Duplex received was whether or not it was ethical to mimic a real human without letting other humans know. I’m glad that this principle just states that “technologies will be subject to appropriate human direction and control”, since it doesn’t discount the possibility of building human-like AIs in the future.

An AI that makes a phone call on our behalf must sound as human as possible, since it’s the best way of ensuring a smooth interaction with the person on the other side. Human-like AIs shall be designed with respect, patience and empathy in mind, but also with human monitoring and control capabilities.

Indeed. But we must not forget the reverse. People must be accountable for what they do with their AI tools.

Maybe I am playing the part of Captain Obvious. Nonetheless, it has to be said. No one blames the manufacturer of bolt cutters if one of its customers uses them to cut a bike lock.

5. Incorporate privacy design principles

When the convenience created by AI intersects with our personal feelings or private data, a new concern is revealed: our personal data can be used against us. The Cambridge Analytica incident, in which personal data was shared with unauthorized third parties, magnified the problem by jeopardizing users' trust in technology.

Google didn’t use many words on this principle, probably because it’s the most difficult one to clarify without directly impacting their business model. However, it represents the biggest tech challenge of the decade, to find the balance between giving up your privacy and getting a reasonable benefit in return. Providing “appropriate transparency and control over the use of data” is the right mitigation, but it won’t make us less uncomfortable when an AI knows the most intimate details about our lives.

I used to get quite annoyed with people who were seemingly SHOCKED about how various platforms used their data, yet ignorant of the fact that they themselves volunteered the lion's share of it openly.
Data protection has always been on my radar, particularly in terms of what I openly share with the world at large. Over the years, I have taken control of my online past, removing most breadcrumbs left over from my childhood and teenage years from search queries. However, I understand that taking control within even one platform can be a daunting task. Even for those who choose to review these things on Facebook, it's certainly not easy.

There is an onus on both parties.

Users themselves should, in fact, be more informed about what they are divulging (and to whom) if they are truly privacy-conscious. Which makes me think of another question . . . what is the age of consent for privacy disclosure?

Facebook defaults this age to 18, though it's easy to game (my own family has members who allowed their kids to join at 14 or 15!). Parents allowing this is one thing, but consider the new parent who constantly uploads and shares photographs of their children. Since many people don't bother with (or worry about?) their privacy settings, these photos are often in the public domain. Thus, by the time the child reaches a stage when they can decide whether or not they agree with this use of their data, it's too late.

Most children (and later, adults) will never think twice about this, but for those who do, what is the recourse?
Asking the parent to take the photos out of the public domain is an option. But consider the issue if the horse is already out of the barn.

One of my cousins (or one of their friends) once posted a picture of themselves on some social media site drinking a whole lot of alcohol (not sure if it was staged or not). Years later, they came across this image on a website labeled "DAMN, they can drink!".
After the admin was contacted, they agreed to take the image down for my cousin. But in reality, they didn't have to. It was in the public domain to begin with, so it was up for grabs.

How would this play out if the image was of a young child or baby who was too young to consent to waiving their right to privacy, and the person putting the photo in the public domain was a parent/guardian or another family member?

I have taken to highlighting this seemingly minuscule issue recently because it may someday become a real one. Maybe one that the criminal justice systems of the world will have to figure out how to deal with. And without any planning as to how that will play out, the end result is almost certain to be bad, just as it is in many cases where judges and politicians have been thrust the responsibility of blindly legislating shiny new technological innovations.

To conclude, privacy is a two-way street. People ought to give the issue more attention than they give a post they scroll past, because future events could depend on it. But at the same time, platforms REALLY need to be more forward about exactly WHAT they are collecting, and how they are using this data. Changing these settings should also be a task of relative ease.
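For what it's worth, here is a rough sketch (hypothetical fields and categories, not any platform's actual settings API) of what being more forward about data collection could look like in practice: a plain manifest that a settings page could surface in human-readable form.

```python
# Hedged sketch of a data-collection manifest a platform could expose to users.
# The categories and flags are hypothetical, chosen only for illustration.
DATA_MANIFEST = {
    "contact_info":     {"collected": True, "shared_with_third_parties": False},
    "location_history": {"collected": True, "shared_with_third_parties": True},
    "uploaded_photos":  {"collected": True, "shared_with_third_parties": False},
    "browsing_habits":  {"collected": True, "shared_with_third_parties": True},
}


def summarize(manifest: dict) -> None:
    """Print a plain-language summary a user could actually read."""
    for category, policy in manifest.items():
        shared = ("and shared with third parties"
                  if policy["shared_with_third_parties"]
                  else "but kept internal")
        print(f"- {category.replace('_', ' ')}: collected {shared}")


summarize(DATA_MANIFEST)
```

Something this simple would not solve the problem, but it would at least make the trade-off visible at a glance instead of burying it in a 40-page policy.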

But first and foremost, the key to this is education. Though we teach the basics of how to operate technology in schools, most of the exposure to the main aspects of this technology (interaction) is self-taught. People learn how to use Facebook, Snapchat and MMS services on their phone, but they often have little guidance on what NOT to do.

What pictures NOT to send in the spur of the moment. How not to behave in a given context. Behaviors with consequences ranging from regret to dealing with law enforcement.

While Artificial Intelligence does, in fact, give us a lot to think about and plan for, it is important to note that the same goes for many technologies available today. Compared to what AI is predicted to become, this tech is often seen as more mechanical than intelligent. Nonetheless, modern technology plays an ever-growing role in the day-to-day lives of connected citizens of the world of all ages and demographics. And as internet speeds keep increasing and high-speed broadband keeps getting more accessible (particularly in rural areas of the first world, and in the global south), more people will join the cloud. If not adequately prepared for the experience that follows, the result could be VERY interesting. For example, fake news tends to mean ignorance for most Westerners, but in the wrong cultural context, it can entail death and genocide. In fact, in some nations, this is no longer theoretical. People HAVE died because of viral and inciting memes propagated on various social media platforms.

Priorities.

Before we even begin to ponder the ramifications of what does not yet exist, we have to get our ducks in a row in terms of our current technological context. It will NOT be easy and will involve partnerships between surprising bedfellows. But it will also help smooth the transition into an increasingly AI-dominated future.

Like it or not, it is coming.
