
Utopia Talk / Politics / Men are better employees
Sam Adams
Member
Wed Oct 10 09:50:48
Amazon creates machine learning algorithm to correlate resumes with employee success.

Computer, after crunching the numbers, decides women less productive in the workplace and recommends hiring fewer of them.

SJWs throw a tantrum.

Seb and duckhat cry.

Amazon shuts down program.

Seb and duckhat celebrate.
Sam Adams
Member
Wed Oct 10 09:51:17
http://www...as-against-women-idUSKCN1MK08G
hood
Member
Wed Oct 10 10:36:03
After reading some of the article, it sounds like it was just a shitty algorithm.
Sam Adams
Member
Wed Oct 10 10:40:08
Damnit hood.

Lol.
Rugian
Member
Wed Oct 10 10:55:17
How can it be a shitty algorithm if it came to the correct conclusion?
hood
Member
Wed Oct 10 11:07:20
That I can't answer, rugian. But it appeared to be spitting out completely unqualified people, not just displaying a male bias.

Plenty of people/things are accidentally right.
Seb
Member
Wed Oct 10 11:31:59
If you feed a genetic algorithm biased data then it will learn the bias.

Algorithms, especially genetic ones, are not objective by default.

How is it that supposedly technically savvy people like Sam still don't understand that?
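A toy sketch of that point (all resumes and labels invented here, and a crude word-scoring stand-in for Amazon's actual model, which is not public): feed a learner hiring decisions that penalize a gendered token and it will learn exactly that penalty.

```python
# Toy demonstration: a model trained on biased hiring decisions
# learns the bias. All data here is invented for illustration.
from collections import defaultdict

# (resume tokens, hired?) -- the "hired" labels are biased:
# otherwise-identical resumes are rejected when they contain "womens".
training = [
    (["python", "aws", "chess"], True),
    (["python", "aws", "womens", "chess"], False),
    (["java", "ml", "captain"], True),
    (["java", "ml", "womens", "captain"], False),
]

# Score each token by how the hire rate among resumes containing it
# differs from the overall hire rate.
overall = sum(hired for _, hired in training) / len(training)
counts = defaultdict(lambda: [0, 0])  # token -> [hired count, total count]
for tokens, hired in training:
    for t in set(tokens):
        counts[t][0] += hired
        counts[t][1] += 1

weights = {t: c[0] / c[1] - overall for t, c in counts.items()}
print(weights["womens"])  # negative: the bias in the labels was learned
print(weights["python"])  # zero: genuinely uninformative in this data
```

Nothing in the scoring rule mentions gender; the negative weight comes entirely from the biased labels.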
Seb
Member
Wed Oct 10 11:48:53
Reading deeper coverage of this, it proves the opposite. The defects in the algorithm pretty much prove, with statistical significance and a high degree of confidence, that the training data was biased against specific terms associated with women.

It actually proves that Amazon's hiring managers are biased against women; otherwise those terms wouldn't have been weighted over terms associated with performance.

TL;DR the only way the genetic algorithm could fit the training data and match the actual decisions was to lock on to indicators of the applicant's gender, not just terms associated with performance.

Sam fail.
Pillz
Member
Wed Oct 10 16:19:34
Reality is male biased.

Whatever will Seb do
werewolf dictator
Member
Wed Oct 10 18:04:59
"It penalized resumes that included the word “women’s,” as in “women’s chess club captain.”"

"i like to segregate myself by sex in weaker smaller division in serious pursuit of nonphysical autistic game"

no surprise ai saw correlation of such people to data of disorganization and lack of productivity in company divisions
Seb
Member
Wed Oct 10 18:36:16
That would be amazing given the algorithm had no access to that data!
werewolf dictator
Member
Wed Oct 10 18:52:01
since it was serious project it is safe to assume they fed it all the objective data they could find..

and ai results said ceteris paribus.. average women stunk at work.. just like they do in chess
Sam Adams
Member
Wed Oct 10 18:52:50
Lol seb is pissed
werewolf dictator
Member
Wed Oct 10 22:02:14
men's versus women's almost objective chess elo ratings [men's probably inflated by ~7 points] after 750 games.. when ratings tend to peak

http://en....news/2014/topical/howard06.png

men striving to be nerds seem better at it than women.. so if hiring for tech.. ai likely had correct pattern recognition.. and grievance studies diversicrats are wrong attacking it..


[male-female elo gap increases in countries with higher female participation rates too]
hood
Member
Wed Oct 10 22:18:19
Gotta love the people arguing about an algorithm that was legitimately suggesting completely unqualified people as top talent.

This really had nothing to do with men/women. It was just a really fucking bad AI.
werewolf dictator
Member
Wed Oct 10 22:19:42
unqualified male chess player likely gets good after 750 games..
hood
Member
Wed Oct 10 22:25:05
Completely unrelated to the posted article. Weird that putin has you working this angle.
Wrath of Orion
Member
Wed Oct 10 22:28:09
http://hum.../discussionpapers/EDP-1605.pdf
werewolf dictator
Member
Wed Oct 10 22:48:25
http://pdf...cd0ebc5e0404eee345c6def3de.pdf

>100x game sample size.. opposite finding
Wrath of Orion
Member
Wed Oct 10 22:54:51
I'm aware of that piece. There are a number of significant differences to the one I posted (sample size is not an interesting one in this case).
CrownRoyal
Member
Wed Oct 10 23:13:45
“hood
Member Wed Oct 10 22:25:05
Completely unrelated to the posted article. Weird that putin has you working this angle.”

Western retards must be riled up. This is how they deal with their colleagues in putinland.

http://www.rt.com/politics/440447-russian-male-state-leader/
Nimatzo
iChihuaha
Thu Oct 11 05:47:33
The word ”bias” is thrown around these days as inherently wrong and negative. It is assumed preferences for or against are arbitrary and/or irrational. My bias against young men in hoodies isn’t arbitrary or irrational in the context I live in. How many bad interactions with gang members do I need to die or be permanently diminished? One.

Am I profiling a lot of innocent people? Yes, but I don’t have the time or appetite for risk to get to know everyone standing in an empty park or on a dark street at night. So I avoid them.

This isn’t perfect and it scales poorly, but it is a heuristic program for risk assessment that works. Bias doesn’t mean always, mostly or inherently wrong. A specific bias must be viewed together with the risk and consequence of being wrong.

Employers pay dearly for hiring the wrong person. Especially in areas where it takes a long time to get trained and where the operation requires highly educated people. You invest time, and then the dingleberry quits, and before he quits he has managed to upset a bunch of people and ruin a bunch of things.

That men are more productive isn’t news. They work longer hours and, specifically in tech, forgo human contact more readily, even with their family. They also commit more suicides, and die earlier. In general, the Pareto principle applies here: some relatively small percentage of workers is responsible for the majority of output, even in creativity.

So the question is what kind of society do you want to live in? We already reward people with high output; this is a market dynamic. Should we now only hire 15th-percentile workers? How much more should they be rewarded? What is the long-term viability of the progression away from the mean? There are societal questions bigger than maximizing quarterly profits at stake.
Seb
Member
Thu Oct 11 07:02:35
Nim:

There are literally laws against profiling recruitment on gender and race.
Seb
Member
Thu Oct 11 07:36:58
Nim:

"work longer hours and specifically for tech, forgo human contact more readily"

We literally discussed on the other thread how this is actually a really bad pattern in tech. To make good products you need to focus on the user: empathy, communication, ability to work with people. Dilbertesque codemonkeys working in cubicles is why we have so many giant IT failures.

And this is what the code of conduct lady meant in part of her critique of "meritocracy" as practiced in tech culture, which focuses on the number of lines added/removed. It's not merit (or even strictly productivity, if what you care about is producing a quality product), it's utility. Rewarding utility with leeway to be shitty to others leaves you in a world of pain.

I also think your profiling argument is insane.

Replace "male" and "female" with "went to a really posh school" - everyone would recognise that as a very self destructive hiring practice as well as being undesirable socially.


Nimatzo
iChihuaha
Thu Oct 11 09:25:25
>>There are literally laws against profiling recruitment on gender and race.<<

Not for me in my daily life though. My point is that we all do this. Even you, Seb: you look at people and you judge them based on how they look and behave and what your past experience is with people who look and behave as such.

But you can program an algorithm that will reliably single out more men than women without intention, based on criteria connected to behavior. This wouldn't be illegal even under those terms. It would just be an output based on certain traits, which you may find more of in a specific group/category.
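To put that claim in toy form (invented numbers, a hypothetical "overtime hours" criterion): a rule that never looks at gender can still select more men, if the criterion it uses is unevenly distributed between groups.

```python
# Toy sketch of a gender-blind rule producing a gendered outcome.
# All applicants and numbers are invented for illustration.
applicants = (
    [{"gender": "m", "overtime_hours": h} for h in [0, 5, 10, 15, 20, 25]] +
    [{"gender": "f", "overtime_hours": h} for h in [0, 0, 5, 5, 10, 15]]
)

# The rule only reads overtime hours, never gender:
selected = [a for a in applicants if a["overtime_hours"] >= 10]

men = sum(a["gender"] == "m" for a in selected)
women = sum(a["gender"] == "f" for a in selected)
print(men, women)  # 4 men, 2 women selected from a 6/6 applicant pool
```

Whether such a facially neutral criterion is lawful is a separate question (disparate-impact rules vary by jurisdiction); the sketch only shows the mechanism.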

>>We literally discussed on the other thread how this is actually a really bad pattern in tech.<<

And I literally asked a relevant question in connection to this description (i.e. not normative). So there are these facts about the present situation, and then there is that place somewhere in the future where one wants to be.

>>Replace "male" and "female"<<

That was actually my first thought of an example. I think it would be smart and rational if a woman avoided a group of men in a dark place or got nervous if she was being followed by an innocent stranger about his business. I chose a life and death scenario to illustrate that this is inherent but not inherently bad in and of itself; it is more complicated.
Nimatzo
iChihuaha
Thu Oct 11 09:46:15
”leeway to be shitty”

Do you see how the threshold and criteria for what constitutes ”shitty” are different for different people? And I mean well beyond ID politics: people of the same sex, race and religion will not react the same to the same ”slight”, or even view the same things as slights.

I can’t force myself to be offended, or force myself to think that you should be offended when I wouldn’t be/haven’t been. In fact, in such a case I feel obligated to explain why you shouldn’t be offended.
Nimatzo
iChihuaha
Thu Oct 11 09:59:43
And we did not discuss hiring practices. You may have mentioned something about it, nothing I responded to. We talked about processes; I mentioned low competency. You referenced the CoC of Linux, something I stayed clear of; all I said was that systems of merit must be reviewed like other systems. What is, isn’t or should be a merit in tech, I have not given any opinions about. But clearly the relevant skills and proficiency are at the top of the list. You can be a total asshole and extremely useful confined in a basement. A useless but extremely kind person is still useless. I mean that group will grow, and that is an issue much larger than CoCs and hiring practices.
Nimatzo
iChihuaha
Thu Oct 11 10:44:56
And that last line: I mean that every year even more people are made redundant by SQL code :) or other forms of automation. To the extent that there are other low-skill jobs, all is good. Now I could be wrong; the bar for entry is also lowered by tech in other areas, making it easier to do things that once required skills and resources few have. But I am not sure that equation balances out. This is partly automation, but also a factor of stricter selection criteria: last year you needed a high school diploma, next year a BSc, and so on. The short answer is that not everyone has in their god-given toolbox the ability to get those degrees or produce such results. None of us is immune to being made redundant or too stupid.
Seb
Member
Thu Oct 11 11:46:54
Nim:

I'm not sure what your point is. Yes people regularly make judgements via profiling. They are often unaware they are even doing so (though you've previously rejected this).

But (a) you are not allowed to do this in recruitment in respect of race, sex, religion or gender, and (b) this is exactly what bias is. Bias explicitly does not mean acting with deliberate malice.

I think you may not have understood how this algo was developed. It's a genetic algorithm - trained on CVs plus previous recruitment sift decisions. It's not that Amazon developed an algo by deliberately looking at factors that correlate more with men producing a biased outcome.

The model learned to penalise certain phrases in the CVs in order to match the historic sift decisions.

That's incredibly suggestive of bias in the manual recruitment process:

The number of female applicants was already small, so the training data for career related keywords relevant to competency, skill and experience should be dominant here.

After investigating, they found the GA was looking for words and phrases that correlated with femininity and adding negative weights.

So the most accurate description here is that the algorithm learned to be biased specifically against women from Amazon's historic hiring decisions.

Ok, but could this be accidental, in the sense that the algorithm is picking up gender indicators because they are easier to detect than whatever relevant indicators the original sifters used that caused them to reject the female CVs?

Highly unlikely, no.
For a GA to have to pick up keywords correlating with femininity and add negative weights in order to fit the data, rather than reproducing the fit by weighting other, relevant terms, very strongly suggests that the sift results in the training data can only be explained by the applicants' gender, not by any other relevant factors.

I.e. Amazon's manual CV sift team has been rejecting lots of women's CVs largely because they are from women (whether they are aware of it or not).

So what this shows is simply that Amazon's previous hiring decisions were very, very likely to have been biased against women (as in, specifically their gender, not coincidentally), and the algo's explicit bias is simply because the training data taught it to be so.

The reference to the other thread is not about hiring practices but about what you are implying makes a good software developer.
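The argument above can be put as a toy check (invented resumes, and a brute-force single-token "rule" standing in for the real model, whose details are not public): if no skill feature can reproduce the hire/reject labels but a gender-proxy token can, the labels themselves encode gender.

```python
# Toy check: skill features alone cannot fit biased labels;
# only a gender-proxy token can. All data is invented.
pairs = [
    ({"python", "aws", "chess"}, True),
    ({"python", "aws", "chess", "womens"}, False),
    ({"go", "k8s", "captain"}, True),
    ({"go", "k8s", "captain", "womens"}, False),
]

def best_accuracy(feature_set):
    """Best accuracy achievable by any single-token hire/reject rule
    restricted to the given features."""
    best = 0.5  # baseline: always predict one class
    for tok in feature_set:
        for hire_if_present in (True, False):
            correct = sum(
                ((tok in toks) == hire_if_present) == hired
                for toks, hired in pairs
            )
            best = max(best, correct / len(pairs))
    return best

skills = {"python", "aws", "chess", "go", "k8s", "captain"}
print(best_accuracy(skills))               # 0.5: skills can't explain the decisions
print(best_accuracy(skills | {"womens"}))  # 1.0: only the proxy token can
```

Each rejected resume is skill-identical to a hired one, so any fit must route through the proxy token, which is the shape of the evidence Seb describes.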

Sam Adams
Member
Thu Oct 11 11:53:51
What if the bias is based on fact, Seb? Should I ignore these facts because your feelings are hurt?
Dukhat
Member
Thu Oct 11 14:30:09
These men's rights threads are so funny.
Seb
Member
Thu Oct 11 16:04:40
Sam Adams:

If the bias was based on fact, then the algo would be picking up on objective, relevant criteria - not, as they discovered when they looked under the hood, random word fragments that correlate with gender and nothing else.

The fact that the algo evolved negative weightings on extraneous words correlated with gender in order to fit the rejections of women's CVs in the training data - rather than matching the training data using the weights for words and fragments associated with success and failure in the much larger number of men's CVs - demonstrates with a high degree of confidence that there were no "facts" here at all.

Most likely, what actually happens is the contextual weighting that is often seen and reported.

Take Nim's idea of what a good programmer is: very focused on producing code, with a GitHub PR history showing him working all hours 7 days a week, uncaring of personal relationships and willing to work 24/7.

So when a recruiter sees this they think: He is smart, ambitious, hard working, driven and knows his stuff.

When a woman displays these tendencies, this differs a lot from their expectation of what a woman should be like, and they think:
She's a know it all, a poor team player, and frankly obsessive. Not a good fit.

This framing type cognitive bias* is well known in all sorts of other areas, so it would be crazy to think you wouldn't see something similar.



*cognitive bias: like confirmation bias, risk aversion etc.
Sam Adams
Member
Thu Oct 11 18:04:20
But what if there was some strong correlation, not necessarily this one, where the facts and political correctness are in disagreement. Which do you choose?
McKobb
Member
Thu Oct 11 19:45:04
I think Seb is a FTBG guy.
Pillz
Member
Thu Oct 11 21:23:52
Fuck that bitch guy?
Nimatzo
iChihuaha
Fri Oct 12 01:09:32
>>They are often unaware they are even doing so (though you've previously rejected this).<<

I rejected a specific example, where there is no evidence that this affects behavior or is even relevant in the longer term. If a hoodie in the dark turns out to be harmless, we adjust. Yes, we shouldn’t read too much into profiling and should appreciate that it is a more complex phenomenon. I am saying this now as well.

”The fact that the algo evolved”

We get the same result when human beings do it. There are several blind tests that show the same thing: when gender is hidden, women do better. This algo is hilarious since the aim was the opposite. These are the results one expects when the people have no fucking clue :) I predicted these types of results.

”Take Nim's idea of what a good programmer is”

How quickly the quality deteriorates. ”Good” is a term loaded with subjective values. I spoke of output and productivity; yes, someone willing to work like a robot will put out a lot. This is objective fact and a factor of time. Given the same qualifications, the person willing to work more will do more. Uncontroversial, simple math. Now we may ask: is output everything? Is the quarterly cycle too dominant, etc.? Is this something we should want? The market and our world reward this a lot; we pay factory workers extra for overtime. Every unit of time you sell gives you more money. It isn’t uncommon that highly skilled people work as consultants, with registered companies, charging by the hour, free from union regulation on work hours and oversight. This is true even in Sweden. These issues are much larger than this fart in space that is this algo, or the CoC of Linux. There are pros and cons.

Anyway, this is a fruitless discussion. I am trying to put everything I see on the table and assess it. You think you have already done this and reached the logically best answer; unfortunately it is obvious that you have just drunk the Kool-Aid. Good luck with that.
Seb
Member
Fri Oct 12 02:49:14
Nim:

Firstly what I'm objecting to is this:

"The word ”bias” is thrown around these days as inherently wrong and negative. It is assumed preferences for or against are arbitrary and/or irrational."

The whole point here is the bias is objectively shown to be irrational and arbitrary.

Your post seemed to suggest you were saying that, like your hoody example, the bias here is learned from some objective fact.

As for GAs learning bias, literally *everyone* who knows how GAs work has been predicting they'd simply learn biases in training data for decades. Right since the "recognise a tank" debacle. It was a plot point in an episode of The Good Wife - a mainstream drama aimed at women that stopped running a few years back. That people like Sam still talk about algos as being objective by default is almost an instant disqualifier of any technical knowledge at all.

Ok, you object to the word good; whatever you want to call it, let's say that attribute is what a human interviewer might be selecting for. There's oodles of evidence that framing bias would lead them to see those traits as negative in a woman, because they make her a *bad* woman (or at least, not what the interviewer thinks of as a good woman) even if they make her a good programmer. People aren't that rational.

But making a rational and objective algo is also hard.

hood
Member
Fri Oct 12 07:18:32
Can we stop talking about hoods and hoodies? I'm getting mixed signals.
Nimatzo
iChihuaha
Fri Oct 12 10:50:26
>>The whole point here is the bias is objectively shown to be irrational and arbitrary.<<

The female bias towards males is not irrational or arbitrary. My bias towards thuggish (to make hood happy) looking people is not irrational or arbitrary. No. When the risk of failing may ruin you, most people become conservative in how they play their deck. You only need to get ruined once for the equation to output 0. These heuristics just scale poorly outside the provincial confines in which they evolved, but those contexts still exist; you live in them. You need to learn to keep those two thoughts inside your head at the same time. The same behavior (profiling) can be adaptive in one context and maladaptive in another.
hood
Member
Fri Oct 12 10:54:14
Merci for the lingo change.
Seb
Member
Fri Oct 12 11:16:01
Nim:

So, why are we talking about your bias against thuggish looking people?

We were talking about Amazon's hiring practices and how a GA had to explicitly "reverse engineer" a means of identifying gender to fit their sifts, meaning that the weightings of the relevant info in the CVs could not explain prior sift patterns.

That's the opposite of objective. The GA was unable to find the objective data to support it other than by creating a proxy gender identifier.

And even your thug example isn't *guaranteed* rational.

Most decisions we make are system 1 heuristics - quick, dirty and normally efficient but prone to systemic bias, which is why there are so many identified cognitive biases - and not system 2 deliberative decisions based on attempted rational thought.

Avoiding hoodies is almost certainly system 1 type.

Excessive risk aversion and loss aversion are recognised cognitive biases that lead to irrational behaviour. For example, have you assessed the relative risk reduction in avoiding hoodies versus increased risk from detours and road crossings?

It's entirely possible that having identified one source of risk and developed a heuristic, you are actually increasing your risk profile despite a rationalisation that sounds reasonable but which you don't necessarily have evidence for. You may also be engaging in confirmation bias (noting the number of people mugged by thuggish-looking people, disregarding those who get run over when crossing roads etc.) or salience bias (mugging seems a more relevant risk as it is more prominent in mind, being associated with an agent in your path; being run over by a driver in the dark while distracted by a shadowy figure in a hoodie is less salient, as the car isn't present or visible at the time).




Hrothgar
Member
Sat Oct 13 14:01:33
"After reading some of the article, it sounds like it was just a shitty algorithm."

Here is where I call cultural BS - IF that shitty algorithm had been making the error of recommending hiring more women/fewer men, you can guarantee there would have been a shitstorm over anyone trying to point out the error and abandon its use.
McKobb
Member
Sat Oct 13 14:04:40
You could just hire a woman to make the algorithm!
Nimatzo
iChihuaha
Mon Oct 15 11:14:10
>>So, why are we talking about your bias against thuggish looking people?<<

Because I had to preface my argument by first explaining that ”bias” is not inherently a scary, bad thing. If you believe it is as a matter of principle, then it will be impossible/fruitless to make any further argument about a specific ”bias”. Which you seem to do. Hence.

You have not thought profiling through. Let’s do the reverse: let’s talk about who your danger radar is not worried about.

Women (50% gone)
Old men
Men in fancy suits
Midgets
Morbidly obese men
Etc.

You keep doing these subtractions we all know are rational to do and you suddenly end up with men aged 15-25. Tadaaa! Then you add the cultural impressions of your culture and how ”hoodlums dress”.

So are you on the lookout for octogenarian thugs? Grats, you are ”biased” against young men who dress a certain way. Unsurprisingly, young men are overrepresented in crime and specifically violent crime, which we have always known. So bias is not inherently irrational or wrong; it can be adaptive or maladaptive. We do not live in a world where _anything_ is guaranteed: you are either dead or alive. It is that kind of heuristic.
Seb
Member
Mon Oct 15 13:29:03
Nim:

There is a marked difference between "most violent people are young men" and "men are more likely to be violent to me".

That's salience bias - it's actually a known cognitive bias and irrational.

If the proportion of young men* who are violent is very low, then taking the fact that violent people are likely to be young men to inform your attitude to young men in general may be quite unwarranted.

*Contracting the list of characteristics for brevity.
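The distinction can be put in numbers (all of them invented for illustration, not real crime statistics) via Bayes' rule:

```python
# Base-rate sketch: "most violent offenders are young men" does not
# imply "a given young man is likely violent". Numbers are invented.
p_young_man = 0.07               # assumed share of population: men aged 15-25
p_violent = 0.005                # assumed share of population violent in a year
p_young_man_given_violent = 0.6  # assumed share of offenders who are young men

# Bayes: P(violent | young man) = P(young man | violent) * P(violent) / P(young man)
p_violent_given_young_man = p_young_man_given_violent * p_violent / p_young_man
print(f"{p_violent_given_young_man:.1%}")  # ~4.3%: most young men are not violent
```

Even with offenders overwhelmingly drawn from one group, the conditional probability for a random member of that group stays small because the base rate of violence is small.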

Nimatzo
iChihuaha
Sat Oct 20 16:46:04
The fact that a heuristic approach does not capture the entire truth, or even most of the truth, is not relevant, nor does it mean that you as an individual can function without heuristics. You simply do not have the time or resources, and you have limited health and one life. You are not thinking clearly because you are too caught up in what triggers you. So let's give a few more examples without direct human agents.

Do you drink water from puddles on the ground? Of course not. Do you not do this because you have scientific evidence that water in the puddle contains something bad for you?

Do you wear a seat belt? You do because of the law maybe, but despite the law, we have statistical data showing that accidents are actually a very rare occurrence. And I mean on the freeway, it won't even matter if you do.

In Sweden pedestrians have right of way at crossings. I still religiously look both ways, seek eye contact and then cross. Again, statistics show these are very rare accidents. Call me old-fashioned, but I have a bias against high-mass, high-velocity metal objects.

The same applies in the way educated people handle every gun as if it is loaded. Risk of ruin changes the equation substantially.

This applies in how we deal with other human beings, they can be very bad for your health or bad for your business and livelihood. They can ruin you for good.

>>There is a marked difference between "most violent people are young men" and "men are more likely to be violent to me".<<

So no, for this purpose there isn't much difference. Either you are alive or dead/seriously injured. Better safe than sorry is a highly adaptive way of navigating risks with a lot of unknowns and limited time and resources, when your life is at stake. You and I are not walking around with unlimited resources, cognitive or otherwise; our brain was not designed to see the world for what it is, but to see it well enough to survive long enough to procreate. And wish it as we may, we have not outgrown these instincts, nor can we get rid of them very easily.

The point is that if you want to change this type of behavior that got us this far, but that you now think is an obstacle in the way of getting further, you may want to do more than call people racist/sexist. That only serves to alienate people who may act on instincts our collective effort doesn't understand very well, let alone us as individuals. You may want to understand what they are for, how they work, when they serve us and when they become maladaptive.

I tell you, what you think you are doing and what you actually achieve on this matter are very different. You pretty much did this throughout our discussion about Islam, immigration and feminism. Waaa nimatzo, how rude, how uncouth, how this and that. I have a PhD from the best school!! It didn't work well; indeed it made me think less of you and go, here is a virtue signaling moron, completely unburdened by substance. I am sure you disagree, but this is what I (someone who could argue his case) thought; others would just simmer in their resentment towards you, the consequences delivered later in the form of election results.

I am not trying to burden you with everything bad in the world, but this type of simplistic thinking you express regarding complex human psychology is as corrosive as it is stupid.
Seb
Member
Sun Oct 21 10:18:45
Nim:

I didn't say heuristics were dispensable. I said they were not rational. And that's the key point: some decisions are better if deliberative, or the law requires them to be deliberative. Recruitment is one. I'm still not sure what your original point was other than, perhaps, a sense that others might be using the term "bias" to mean, perhaps, "immoral". I'm guessing here because your exact phrase was "inherently wrong or negative" and that it is (wrongly) assumed biases are irrational.

Well, biases in the form of heuristics are certainly irrational, as we discussed.

And in this context (recruitment displaying bias against certain characteristics) it's both wrong and irrational.
The law pretty much requires a rational approach rather than a heuristic approach at least on this element of recruitment.

Recruitment is supposed to be free from bias with respect to certain criteria so it is here inherently wrong and negative.

At no point did anyone suggest every decision should be made by a Spock like process of logical deduction and quantitative analysis.

As I think I've said many many times: people are not rational (and could not function if they were to try to be). Models based on rational actors will fail under many circumstances.


Not that it's key but some of your examples are pretty poor. It would be rational to wear a seatbelt even if accidents are rare if the burden of doing so is low, for example.

"So no, for this purpose there isn't much difference."

Actually, there is. Those are two very different probabilities and frequencies of occurrence. The whole point of risk aversion and other similar cognitive biases is that we approach the former as if it were the latter, because there was a selection bias for reproduction on evolutionary timescales.

But that confusion between the two persists in the more complex environment we live in now, where for an individual existence often isn't at stake. This often means that we in fact erroneously take on more risk by doing the thing that "feels" less risky - the heuristic no longer works effectively.

The way to change the behaviour in this context is obvious: huge fines for companies that can be shown to be biased, or that fail to take sensible steps to reduce the potential for bias in recruitment.

So obviously when Amazon realised their recruitment tool was biased, they stopped using it, and I assume will be looking at their traditional process to understand the source of bias, to avoid legal exposure.

I'm sorry you feel that accusations of bias are fundamentally a moral statement but hey-ho, we can't let ourselves ignore this issue just because it makes you feel sad.

If it helps, I think the truly immoral thing is less having a bias, but going to enormous lengths to try and pretend they are ok and legitimate and shouldn't be challenged or talked about.





Seb
Member
Sun Oct 21 10:22:11
Also, the PhD stuff isn't about bias. It's because you keep making really shit arguments supposedly based on science that demonstrably don't stack up, and then wrapping yourself in the flag - so to speak - as somehow speaking on behalf of objective truth. It's utterly exasperating to have someone cargo-cult science at you who has never done a real literature review in their life.
