Tuesday, 5 June 2007


Lecture 11 (part 5)

Maslow and motivating employees

Modern Management

The Teleac TV programme "Modern Management" opens as follows: of all the challenges managers face, motivating employees is psychologically probably the most complex. It requires insight into what drives people, and the ability to create an environment in which those drives can come into their own.

The programme then turns to Maslow's hierarchy of needs. The central idea is that every human being has a need for:

1. Food (physiological needs); Etzioni: "food",
2. Safety and shelter (safety and security); Etzioni: "shelter",
3. Affection, being part of a larger whole, belonging somewhere (need for belongingness); Etzioni: "affection",
4. Appreciation, respect and recognition,
5. Self-actualization (personal growth and self-development, as well as self-expression and being able to follow one's own personal drives).
The important point here is that when people are on their own and under threat as well, they pursue only their physiological needs (as animals do). Once food and drink are regularly or sufficiently available, they turn (= are able to turn) to clothing (warm clothing, rainwear), to (safe) housing (a roof over their heads), and so on. Only after these basic needs have also been met does room arise for "love", or at least for more limited forms of the "need for belongingness". In other words, Maslow's classification is hierarchical. Etzioni (not featured in the "Modern Management" broadcast) simplifies matters somewhat: he works with a division into lower Human Needs, food and shelter (nos. 1 and 2), and higher Human Needs (nos. 3, 4 and 5).

The lower "Human Needs" have to do with people's physical environment.
The "need for belongingness" and the "need for appreciation" have to do with the human environment. The idea that something could be done about that too, or that it could at least be taken into account, was completely new in Taylor's day:
"Workers were lazy and cut corners. A strict apparatus of command and control therefore had to be deployed." Thanks in part to the famous Hawthorne experiment, this view of workers, of Human Resources, changed.

Various directors and managers then take the stage to describe what they have done to meet the higher "Human Needs" of their own staff.
For the "need for belongingness" (no. 3) these included:
A) setting up Quality Circles,
B) setting up Self-Managing Teams,
C) involving employees in (process) changes and new policy.
For "self-actualization" (no. 5) they mentioned things such as:
D) placement in positions based on Personal Drives,
E) placement in "better" positions, i.e. job enrichment,
F) person-centred career policy, and the like.
They indicated that the above also addressed the basic need for respect and recognition (no. 4): employees were treated as adults and with respect, received (positive) feedback, managers stopped reacting only when things went wrong, and so on.

Lecture 11 (part 4)

By analysing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to be successful.

Nice
The most important condition is that the strategy must be "nice", that is, it will not defect before its opponent does. Almost all of the top-scoring strategies were nice. In other words, a purely selfish strategy, for purely selfish reasons, will never strike first.

Retaliating
However, Axelrod contended, the successful strategy must not be a blind optimist. It must always retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as "nasty" strategies will ruthlessly exploit such softies.

Forgiving
Another quality of successful strategies is that they must be forgiving. Though they will retaliate, they will fall back to cooperating once the opponent stops defecting. This cuts short long runs of revenge and counter-revenge, maximizing points.

Non-envious
The last quality is being non-envious, that is, not striving to score more than the opponent (something a "nice" strategy can never do anyway: it can never outscore its opponent). Axelrod therefore reached the Utopian-sounding conclusion that selfish individuals, for their own selfish good, will tend to be nice, forgiving and non-envious. One of the most important conclusions of Axelrod's study of IPDs is that nice guys can finish first.
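Tit-for-Tat, the strategy that won Axelrod's tournaments, exhibits all four qualities. The sketch below is my own illustration, not Axelrod's tournament code; the `defects_once` opponent is a made-up example used only to show one retaliation followed by forgiveness:

```python
# A sketch (not Axelrod's tournament code) of Tit-for-Tat. "C" = cooperate,
# "D" = defect; a strategy sees both move histories and returns its next move.

def tit_for_tat(my_history, opponent_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    if not opponent_history:            # "nice": never the first to defect
        return "C"
    return opponent_history[-1]         # retaliates after "D", forgives after "C"

def defects_once(my_history, opponent_history):
    """A made-up opponent for illustration: defects only on its third move."""
    return "D" if len(my_history) == 2 else "C"

def play(strategy_a, strategy_b, rounds):
    """Run two strategies against each other, returning their move histories."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

moves, _ = play(tit_for_tat, defects_once, 6)
print(moves)   # ['C', 'C', 'C', 'D', 'C', 'C'] -- one retaliation, then forgiveness
```

Tit-for-Tat never defects first (nice), answers the round-three defection exactly once (retaliating), returns to cooperation immediately afterwards (forgiving), and by construction can never outscore its opponent (non-envious).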

The optimal (points-maximizing) strategy for the one-time PD game is simply defection; as explained above, this is true whatever the composition of opponents may be. However, in the iterated-PD game the optimal strategy depends upon the strategies of likely opponents, and how they will react to defections and cooperations. For example, consider a population where everyone defects every time, except for a single individual following the Tit-for-Tat strategy. That individual is at a slight disadvantage because of the loss on the first turn. In such a population, the optimal strategy for that individual is to defect every time. In a population with a certain percentage of always-defectors and the rest being Tit-for-Tat players, the optimal strategy for an individual depends on the percentage, and on the length of the game.
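The first-turn loss in that population example can be made concrete. A sketch, assuming the canonical payoffs (T=5, R=3, P=1, S=0) introduced later in this document; the function names are my own:

```python
# One Tit-for-Tat player facing an Always Defect opponent, under the
# canonical payoffs (T=5, R=3, P=1, S=0). Tuples are (row, column) scores.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(mine, theirs):
    return "C" if not theirs else theirs[-1]

def always_defect(mine, theirs):
    return "D"

def match(strategy_a, strategy_b, rounds):
    """Total score of each strategy over an iterated match."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        hist_a.append(a)
        hist_b.append(b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
    return score_a, score_b

# Tit-for-Tat loses only the first turn (Sucker's payoff 0 vs Temptation 5),
# then mirrors the defector; an Always Defect player in its place scores 10.
print(match(tit_for_tat, always_defect, 10))     # (9, 14)
print(match(always_defect, always_defect, 10))   # (10, 10)
```

Over ten rounds the lone Tit-for-Tat player scores 9 where an always-defector would score 10: exactly the slight first-turn disadvantage described above.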

Lecture 11 (part 3)

This analysis of the one-shot game is in complete contradiction to classical game theory, but follows naturally from the symmetry between the two players:

an optimal strategy must be the same for both players
the result must therefore lie on the diagonal of the payoff matrix
maximize the return over the solutions on the diagonal
cooperate
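That chain of reasoning can be sketched in a few lines of Python (the payoff values are the canonical 3/0/5/1 used later in this document, an assumption of the illustration rather than part of the argument):

```python
# The symmetry argument in code: only moves on the diagonal of the payoff
# matrix (both players choosing alike) are candidates; pick the best one.
# Payoff values are the canonical ones used later in this document.

payoff = {("C", "C"): 3, ("C", "D"): 0,   # row player's payoff
          ("D", "C"): 5, ("D", "D"): 1}

diagonal = {move: payoff[(move, move)] for move in ("C", "D")}
best = max(diagonal, key=diagonal.get)
print(best)   # C -- cooperating maximizes the symmetric (diagonal) return
```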

Morality
While it is normally thought that morality must involve the constraint of self-interest, David Gauthier famously argues that cooperating in the prisoner's dilemma on moral principles is consistent with self-interest and the axioms of game theory. It is most prudent to give up straightforward maximizing and instead adopt a disposition of constrained maximization, according to which one resolves to cooperate with all similarly disposed persons and to defect on the rest. In other words, moral constraints are justified because they make us all better off, in terms of our preferences (whatever they may be). This form of contractarianism claims that good moral thinking is just an elevated and subtly strategic version of plain old means-end reasoning. Those who defect can be predicted, because people are not completely opaque.

Douglas Hofstadter expresses a strong personal belief that the mathematical symmetry is reinforced by a moral symmetry, along the lines of the Kantian categorical imperative: defecting in the hope that the other player cooperates is morally indefensible. If players treat each other as they would treat themselves, then off-diagonal results cannot occur.

Lecture 11 (part 2)

The classical prisoner's dilemma
The Prisoner's dilemma was originally framed by Merrill Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence payoffs and gave it the "Prisoner's Dilemma" name (Poundstone, 1992).

The classical prisoner's dilemma (PD) is as follows:

Two suspects, A and B, are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal: if one testifies for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both stay silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a two-year sentence. Each prisoner must make the choice of whether to betray the other or to remain silent. However, neither prisoner knows for sure what choice the other prisoner will make. So this dilemma poses the question: How should the prisoners act?
The dilemma can be summarized thus:

                          Prisoner B stays silent        Prisoner B betrays
 Prisoner A stays silent  Each serves six months         Prisoner A serves ten years;
                                                         Prisoner B goes free
 Prisoner A betrays       Prisoner A goes free;          Each serves two years
                          Prisoner B serves ten years

The dilemma arises when one assumes that both prisoners only care about minimizing their own jail terms. Each prisoner has two options: to cooperate with his accomplice and stay quiet, or to defect from their implied pact and betray his accomplice in return for a lighter sentence. The outcome of each choice depends on the choice of the accomplice, but each prisoner must choose without knowing what his accomplice has chosen to do.

In deciding what to do in strategic situations, it is normally important to predict what others will do. This is not the case here. If you knew the other prisoner would stay silent, your best move is to betray as you then walk free instead of receiving the minor sentence. If you knew the other prisoner would betray, your best move is still to betray, as you receive a lesser sentence than by silence. Betraying is a dominant strategy. The other prisoner reasons similarly, and therefore also chooses to betray. Yet by both defecting they get a lower payoff than they would get by staying silent. So rational, self-interested play results in each prisoner being worse off than if they had stayed silent. In more technical language, this demonstrates very elegantly that in a non-zero sum game a Nash Equilibrium need not be a Pareto optimum.
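The dominance argument can be checked mechanically. A minimal sketch, using the sentences from the story expressed in months (lower is better for the prisoner):

```python
# Checking dominance mechanically: sentences in months for every pair of
# choices, indexed as (A's choice, B's choice); "S" = stay silent, "B" = betray.
sentence = {("S", "S"): (6, 6),   ("S", "B"): (120, 0),
            ("B", "S"): (0, 120), ("B", "B"): (24, 24)}

def best_response(opponent_choice):
    """A's choice minimizing A's own sentence, given B's fixed choice."""
    return min(("S", "B"), key=lambda mine: sentence[(mine, opponent_choice)][0])

# Betraying is the best response to BOTH of B's choices -> a dominant strategy:
print(best_response("S"), best_response("B"))   # B B

# Yet mutual betrayal (24 months each) is worse for both than mutual silence
# (6 months each): the Nash equilibrium is not Pareto optimal.
```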

Note that the paradox of the situation lies in the fact that the prisoners are not defecting in the hope that the other will cooperate. Even when both know the other to be rational and selfish, both will play defect. Defect is what they will play no matter what, even though they know full well that the other player is playing defect as well and that both would be better off with a different result.

Note that the "Stay Silent" and "Betray" strategies may be known as "don't confess" and "confess", or the more standard "cooperate" and "defect", respectively.

Generalized form
We can expose the skeleton of the game by stripping it of the Prisoners' subtext. The generalized form of the game has been used frequently in experimental economics. The following rules give a typical realization of the game.

There are two players and a banker. Each player holds a set of two cards:
one printed with the word "Cooperate", the other printed with "Defect" (the standard terminology for the game). Each player puts one card face-down in front of the banker. By laying them face down, the possibility of a player knowing the other player's selection in advance is eliminated (although revealing one's move does not affect the dominance analysis[1]). At the end of the turn, the banker turns over both cards and gives out the payments accordingly.

If player 1 (red) defects and player 2 (blue) cooperates, player 1 gets the Temptation to Defect payoff of 5 points while player 2 receives the Sucker's payoff of 0 points. If both cooperate they get the Reward for Mutual Cooperation payoff of 3 points each, while if they both defect they get the Punishment for Mutual Defection payoff of 1 point. The checker board payoff matrix showing the payoffs is given below.

Canonical PD payoff matrix
              Cooperate   Defect
 Cooperate      3, 3       0, 5
 Defect         5, 0       1, 1

In "win-lose" terminology the table looks like this:

Cooperate Defect
Cooperate win-win lose much-win much
Defect win much-lose much lose-lose

These point assignments are given arbitrarily for illustration. It is possible to generalize them. Let T stand for Temptation to defect, R for Reward for mutual cooperation, P for Punishment for mutual defection and S for Sucker's payoff. The following inequalities must hold:

T > R > P > S

In addition to the above condition, if the game is repeatedly played by two players, the following condition should be added.[2]

2 R > T + S

If that condition does not hold, then full cooperation is not necessarily Pareto optimal, as the players are collectively better off by having each player alternate between cooperate and defect.
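A quick sketch checking both inequalities for the canonical payoffs, and why the second one matters: under alternation each player averages (T + S)/2 per round, which should not beat the mutual-cooperation reward R:

```python
# Verifying both conditions for the canonical payoffs, and why 2R > T + S
# matters: alternating exploitation averages (T + S)/2 per round per player.
T, R, P, S = 5, 3, 1, 0

assert T > R > P > S        # the defining Prisoner's Dilemma inequality
assert 2 * R > T + S        # the extra condition for the iterated game

alternating_average = (T + S) / 2   # players take turns defecting on each other
print(alternating_average, "<", R)  # 2.5 < 3: steady cooperation pays more
```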

These rules were laid down by cognitive scientist Douglas Hofstadter and form the canonical formal description of a typical game of Prisoner's Dilemma.