UK Cyber Security Agency Embraces Political Correctness

Just when I thought I had seen it all. I wake up, turn on my tech-oriented podcast, and WHAM!

Them SJWs strike again.

UK Cybersecurity Agency Drops ‘Blacklist’ and ‘Whitelist’ Terms Over Racial Stereotyping

‘If you’re thinking about getting in touch saying this is political correctness gone mad, don’t bother,’ the UK’s National Cyber Security Centre said in the announcement.

The words “blacklist” and “whitelist” get tossed around a lot in cybersecurity. But now a UK government agency has decided to retire the terminology due to the racial stereotyping the language can promote.

The UK’s National Cyber Security Centre is making the change after a customer pointed out how the words can needlessly perpetuate stigmas. “It’s fairly common to say whitelisting and blacklisting to describe desirable and undesirable things in cyber security,” wrote the NCSC’s head of advice and guidance Emma W. last week.

“However, there’s an issue with the terminology. It only makes sense if you equate white with ‘good, permitted, safe’ and black with ‘bad, dangerous, forbidden’,” she added. “There are some obvious problems with this. So in the name of helping to stamp out racism in cyber security, we will avoid this casually pejorative wording on our website in the future.”

To replace the terminology, NCSC has opted for the words “deny list” and “allow list,” which will now be used across its website and cybersecurity advisories. The language is not only clearer, but also more inclusive, the agency said.

“No, it’s not the biggest issue in the world — but to borrow a slogan from elsewhere: every little helps,” Emma W. added. “You may not see why this matters. If you’re not adversely affected by racial stereotyping yourself, then please count yourself lucky. For some of your colleagues (and potential future colleagues), this really is a change worth making.”

https://www.pcmag.com/news/uk-cybersecurity-agency-drops-blacklist-and-whitelist-terms-over-racial

 

Give me a break. Just because I maintain various black and white lists doesn’t mean I am racist, nor is it perpetuating a stereotype! And how about the hacking community . . . black hat and white hat are not racial designations!

Of all the problems we could be focusing on in 2020, THIS is the best we can do?! REALLY?!

 

* * *

 

What you have just read is one way to look at this new development. Pick a forum of your choice and you are likely to see this dynamic. Just another case of the weak making the world safer for themselves.

However, I am not going to take that view. Though I (like most others, it seems) have never really thought about the connotations behind terms like “whitelist” and “blacklist”, I now see the harm. Even if we cast aside the role of privilege in situations like this, we are still left with the fact that the alternative is better than the status quo.

Allow / Deny List

White / Black List

It’s simple. It’s obvious from the perspective of users of any skill level. And it’s less reminiscent of the human bias of past and present, which just so happens to live on in the form of code.
The code is not inherently biased (code can’t be biased; it’s just an assortment of 1s and 0s!). The coder may not even be inherently (purposely) biased. However, we all tend to be a product of our environment. Since the internet was born in a nation with a history of institutionalized racism towards African Americans, is it such a surprise that this bias would turn up in one of the nation’s greatest achievements?

The code ain’t biased. And there are arguably bigger problems one could be tackling. Nonetheless, nothing beats more precise language without the stigma.
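For the programmers in the room, this is about as low-stakes as renames get. Here is a minimal sketch (in Python, with made-up domains; the function and variable names are mine, not the NCSC’s) of what the before and after might look like:

```python
# Before: the conventional (and casually loaded) terminology.
blacklist = {"malware.example.com", "phish.example.net"}
whitelist = {"intranet.example.org"}

# After: the NCSC's preferred wording. The names now state the policy
# outright, which is the clarity argument in a nutshell.
deny_list = {"malware.example.com", "phish.example.net"}
allow_list = {"intranet.example.org"}

def is_permitted(domain: str) -> bool:
    """Deny anything on the deny list; allow only what is on the allow list."""
    if domain in deny_list:
        return False
    return domain in allow_list

print(is_permitted("intranet.example.org"))  # True
print(is_permitted("malware.example.com"))   # False
```

Same behaviour either way; only the names change. Which is rather the point.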

As for the hacking community and its black, white and grey hats . . . is it really that big a deal to pick a new colour scheme?

Knowing how this segment operates . . . yes, it is. The info-sec community could unanimously embrace whatever change it wanted, but the “FUCK PC CULTURE!” crowd would keep wearing their black & white hats till the day they die.

Fine. Herding an entire cohort is problematic at the best of times (let alone one that prides itself on its rebelliousness and anarchism). Continue to identify by way of whatever hat you want.

As for the rest of us, may I suggest green hats (good) and red hats (bad)? Grey hats can either stay the same, OR may I suggest brown hats (the colour one attains upon mixing red and green)?
The organization Red Hat may take offence at this new scheme. As would so-called Red Hat hackers, who apparently either target Linux systems (please don’t) or slay black hats. Come to think of it, green hat is taken as well.

https://www.techfunnel.com/information-technology/different-types-of-hackers/

I’m reminded of a Carlin segment.

“Everybody’s got a fucking hat!”

Different topic. But you get the point. Are the hats REALLY necessary (black, white or otherwise)? Or is there a better way that is less arbitrary?

* * *

It may all seem silly to some people: making a big deal out of a non-issue in the world of technology (only the latest field to be targeted by this fad of PC culture). And while one could argue that institutionalized bias hasn’t made much difference to the future of most industries up to now, the same cannot be said for the technology industry. It all comes down to artificial intelligence.

Before the modern era, most of the code that ran on our devices could be said to be primarily dumb. By that, I mean that no matter what little quirks humans reflected into the 1s and 0s, there was relatively little consequence. Black and white lists, for example. However, in an era where the training and embrace of artificial intelligence is on the rise in every single area of life, there is no longer room for unaccounted-for human bias. Biases in artificial intelligence algorithms don’t just sit on millions of desktops and servers for decades without consequence; they actively alter the decision-making of those algorithms. And every time one of these algorithms displays its bias in public view, the entire sector takes a reputational hit on account of the programmer’s error.

By now, we have all likely heard about cases of racist, sexist, and otherwise biased artificial intelligence. All of which only adds to the bad reputation that the technology has earned in the public eye thanks to Hollywood, opinions from people like Elon Musk, and just the weird factor of it all. We just don’t like being the second-biggest brain in the room.

Or more pertinently, we tend not to trust black-box AI: algorithms that are fed a given set of data and return a result without providing any means of tracking exactly what factors led to that conclusion. This scares people. Given the reputation of the technology so far, I don’t blame the public for being wary.
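To make “black box” concrete, here is a toy stand-in (the weights and the loan-style scenario are invented, and this is not any real library’s API). The caller sees an input and a verdict, and nothing in between:

```python
# A toy black box: from the outside, features go in and a verdict comes
# out. The "reasoning" (here, three opaque learned weights standing in
# for millions of real parameters) is invisible to the person affected.
def black_box_model(features: list[float]) -> str:
    weights = [0.3, -1.2, 0.7]  # the caller never sees these
    score = sum(w * x for w, x in zip(weights, features))
    return "approved" if score >= 0 else "denied"

print(black_box_model([1.0, 0.1, 0.5]))  # "approved". But WHY?
```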

Despite this bias problem, however, I still think that artificial intelligence has a bright future in many areas of human existence. In fact, when (I don’t think it is a matter of IF, should the industry take this problem seriously!) we figure out how to weed out the bias-inducing factors clouding current-day outputs, I think artificial intelligence has an excellent chance of acting the part of a neutral arbitrator in places where decisions based on unaccounted-for human biases are notoriously prevalent. For example, in the judicial system, and even in Human Resources departments worldwide (One, Two).

Speaking of the judicial system, the CBS show Bull (despite the problems associated with the actor behind its protagonist; some even argue the same of CBS in general (One, Two)) is a brilliant example of human bias in action. The whole point of Jason Bull’s career (trial science) and business is essentially finding a way to manipulate the humans of the judicial system into finding in his clients’ favour. Whilst the whole concept may seem far-fetched on the surface, it really isn’t far from reality. Particularly if you are a member of a notoriously targeted minority in the society you live in.

Though many people tend to fear black-box AI, I am far more untrusting of the human brain. Because there is no more inaccessible black box than the one that resides between the ears of any person. A box that could be motivated by biases and annoyances ranging from the racial to the mundane (“I’m so bored. Is it lunchtime yet?” or “I need to pee so bad!”).
When an individual or group of humans (such as a jury, judge or human resources manager) makes a decision about a given individual, there is no way to gain any insight into what drove that decision-making process. At best, you are forced to take the person’s word for it that the decision was fair (despite the fact that many humans are unaware of how systemic biases may be affecting their decisions). In reality, there is often no way to even question the individuals making the decisions, so you have to take it on faith that they are acting in your best interests.

Knowing how humans are, I have no such faith in the human species. At this point in time, such an opinion doesn’t mean much since humans are still in charge of making many of the world’s crucial decisions. However, given the choice between some kind of fair artificial intelligence algorithm and a human brain, I lean towards trusting the AI.

Of course, not yet. Judging from the biased outputs of many of these systems, it looks to my untrained eye like many programmers have yet to accept the concept that is “Garbage in, garbage out”. If the data you are feeding your algorithm is riddled with biases (be they apparent, or biases of omission), the end result is going to be less than desirable. Predictable, even.
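Here is a toy illustration of that principle, with entirely fabricated data: a crude “model” trained on biased historical hiring decisions reproduces the bias perfectly, because the bias is all it was ever shown.

```python
# Garbage in, garbage out: the crudest possible "model" (majority vote
# per group) faithfully learns the prejudice baked into its training data.
from collections import Counter

# Fabricated history: past managers systematically rejected group "B".
history = (
    [("A", "hired")] * 80 + [("A", "rejected")] * 20
    + [("B", "hired")] * 20 + [("B", "rejected")] * 80
)

def train(records):
    """Learn the most common outcome for each group."""
    tallies = {}
    for group, outcome in records:
        tallies.setdefault(group, Counter())[outcome] += 1
    return {group: tally.most_common(1)[0][0] for group, tally in tallies.items()}

model = train(history)
print(model)  # {'A': 'hired', 'B': 'rejected'}: the old bias, now automated
```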
It reminds me of the process of raising a child. Children are not born racist, sexist or otherwise pre-equipped with any manner of human biases. This is primarily learned behaviour. And since most (all?) aspects of human culture tend to be saturated in the unexamined biases of primitive societies, carried forward by later generations for primarily irrational reasons (tradition), it’s almost impossible to be born and raised without accepting some form of bias.

As the annoying TikTok that I have quoted a few thousand times in the past 2 months states:

There is a reason why that saying annoys many philosophers critical of Stoicism. In fact, there is a reason why Stoicism annoys me. Having worked in corporate environments all my life, the mantra has always been essentially “Just deal with it, or there is the door!”. It makes for an easy-to-manage workforce when all of the cogs just mindlessly obey their orders. However, it is an inherently shortsighted management style, since no one is better positioned to spot problems and inefficiencies than ground-level employees. If a company’s culture dictates that employees just deal with it, the result can be longstanding (and often silly) inefficiencies that may well be trivial to correct.

As such, what LOOKS to be a well-oiled machine may well be operating below its true capability. Whilst that may not matter when business is good and money is plentiful, the whole situation can change when finances tighten.

While not directly applicable to the conversation that is artificial intelligence, it still comes together. Unlike the pipe dream of correcting all the biases of even ONE human, we can correct the biases of the artificial intelligence algorithms we create. Whilst this may well be more of a challenge than it sounds, I don’t think that all AI algorithms necessarily have to be black boxes. In fact, there is good reason to promote transparency in decision-making processes.
Though this will be crucial in many areas, nowhere more so than in the realm of the judicial system. If an algorithm is to be trusted with handing down judgements on a seemingly automated basis, there should be a way for the recipients of these sentences to know how the decision came to be. And of course, a process by which it can be appealed.
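What might that transparency look like in practice? Here is a hedged sketch (the factors, weights and threshold are all invented for illustration): a decision function that returns an itemized account of what drove its verdict, giving the recipient something concrete to appeal.

```python
# A transparent (non-black-box) decision: the verdict ships with an
# itemized breakdown of exactly which factor contributed what.
WEIGHTS = {"prior_offences": -2.0, "stable_employment": 1.5, "completed_program": 1.0}
THRESHOLD = 0.0

def decide(applicant: dict) -> dict:
    contributions = {factor: weight * applicant.get(factor, 0)
                     for factor, weight in WEIGHTS.items()}
    score = sum(contributions.values())
    return {
        "granted": score >= THRESHOLD,
        "score": score,
        "contributions": contributions,  # the audit trail for any appeal
    }

print(decide({"prior_offences": 1, "stable_employment": 1, "completed_program": 1}))
# {'granted': True, 'score': 0.5, 'contributions': {...}}
```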

As such, I have my doubts that many appeals court or supreme court judges are going to be automated away anytime soon. They may, in fact, get a whole lot busier in the first decade or two of the transition. At least until public jitters and mistrust of the system are calmed.

 

* * *

It would seem that we are miles from where we started (in terms of this post). After all, how do white and black lists in ANY way affect the future of artificial intelligence? Or to reference an older terminology controversy originating in the tech community, how does merely naming a concept the master/slave network architecture cause harm?

First of all, an explanation.

Master/slave is a model of communication for hardware devices where one device has a unidirectional control over one or more devices. This is often used in the electronic hardware space where one device acts as the controller, whereas the other devices are the ones being controlled. In short, one is the master and the others are slaves to be controlled by the master. The most common example of this is the master/slave configuration of IDE disk drives attached on the same cable, where the master is the primary drive and the slave is the secondary drive.

https://www.techopedia.com/definition/2235/masterslave

And second, privilege tends to play a big role in whether or not the wording is offensive. Someone who grew up and otherwise lives outside the context of life as an African American in America (and really, anywhere) would predictably find little harm in what they interpret as unrelated markers in an entirely unrelated context. For people who have experienced that background of prejudice, however, these unanalyzed tags represent yet another example of systemic bias. That there is a contingent of programmers that vocally supports such language only serves to shore up this conclusion.

Not all of the industry is unwilling to make changes in the name of stemming old biases, however. Around the same timeframe, both Google and Python (one of the world’s most popular programming languages) committed to purging the antiquated and offensive terms from their code bases. Python replaced “slave” with “worker” and “master” with “parent process” (in the context of a network, one just drops the word “process”).
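For what it’s worth, the new wording maps onto real code without any friction. A minimal sketch using Python’s standard library (the task itself is invented); note that nothing about the parent/worker relationship is lost in translation:

```python
# A parent process dispatching tasks to a pool of workers. The same
# pattern was long described as "master/slave"; the behaviour is
# identical under either name.
from multiprocessing import Pool

def square(n: int) -> int:
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as workers:  # the parent farms work out
        print(workers.map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```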

Thus proving once more that “this is how it has always been!” doesn’t mean that this is how it always HAS to be.

Of course, again, none of this has anything to do with artificial intelligence at first glance. However, it can serve as a nice jumping-off point, since scrutinizing our existing dumb code base for these unnoticed (and thus, unevaluated) biases can help prepare us for the care that is required in creating and maintaining the artificial intelligence processes of the future. Or, at the very least, it can serve as an excellent tool for determining which programmers embrace the right frame of mind to tackle such a finicky project.

 

https://getpocket.com/explore/item/how-to-think-about-implicit-bias?utm_source=pocket-newtab

Though I didn’t utilize this article in writing the piece, it was recommended to me in the process of gathering related materials. It’s an interesting read.
