
How to make a comparative analysis between the decision-making robots and human?


I have the challenge of finding a way to model, develop, validate, and implement an artificial cognitive architecture for mobile robots in industrial environments. Within my present scope, the question is how to establish a method for evaluating the level of "intelligence" of these robots, or of generations of virtual or physical robots, by comparison with the "intelligence" of humans and other animals.


Between “Paralysis by Analysis” and “Extinction by Instinct”


“I come from an environment where, if you see a snake, you kill it. At GM, if you see a snake, the first thing you do is go hire a consultant on snakes. Then you get a committee on snakes, and then you discuss it for a couple of years. The most likely course of action is — nothing. You figure the snake hasn’t bitten anybody yet, so you just let him crawl around on the factory floor.” — Ross Perot1

As time passes, old management formulas become outmoded and are replaced by new ones, but the underlying message is often the same: formal analysis — the systematic study of issues — can help organizations make better decisions. This seemingly plausible hypothesis is supported by an extensive literature in cognitive psychology that shows convincingly that unaided human judgment is frequently flawed.2 For example, people seem to be unduly influenced by recent or vivid events, consistently underestimate the role of chance, and are often guilty of “wishful thinking.” Formal analytical techniques are a way to avoid such problems.

However, the “rational” approach has also had some influential detractors.3 For example, Peters and Waterman condemn formal analysis for its bias toward negative responses, its degree of abstraction from reality, its inability to deal adequately with nonquantifiable values, its inflexibility and bias against experimentation, and finally its tendency to lead to paralysis. In fact, most of us are familiar with “paralysis by analysis.” If we have not experienced it in our own working environment, we have certainly seen it in the unending parade of studies, inquiries, papers, and reports of all shapes, sizes, and colors emerging from government agencies. And, as the opening quotation shows, large private corporations are also a fertile breeding ground for this disease.

Thus managers need to navigate between two deadly extremes: on the one hand, ill-conceived and arbitrary decisions made without systematic study and reflection (“extinction by instinct”) and, on the other, a retreat into abstraction and conservatism that relies obsessively on numbers, analyses, and reports (“paralysis by analysis”). But why do some organizations become bogged down in analysis? Why are certain decisions insufficiently analyzed? How can rationality and efficiency be combined? These are the issues I explore in this article.


About the Author

Ann Langley is a professor in the department of administrative science, Université du Québec à Montréal.

References

1. R. Perot, “The GM System Is Like a Blanket of Fog,” Fortune, 15 February 1988, pp. 48 ff.

2. D. Kahneman, P. Slovic, and A. Tversky, Judgment under Uncertainty: Heuristics and Biases (Cambridge, England: Cambridge University Press, 1982);

R.M. Hogarth, Judgement and Choice: The Psychology of Decision (Chichester, England: Wiley, 1980); and

R. Nisbett and L. Ross, Human Inference: Strategies and Shortcomings of Social Judgment (Englewood Cliffs, New Jersey: Prentice-Hall, 1980).

3. See, for example, H. Mintzberg, Mintzberg on Management (New York: Free Press, 1989);

R.T. Pascale and A.G. Athos, The Art of Japanese Management (New York: Warner Books, 1981); and

T.J. Peters and R.H. Waterman, In Search of Excellence (New York: Harper & Row, 1982).

4. The expressions were borrowed from:

F.E. Kast and J.E. Rosenzweig, Organization and Management: A Systems Approach (New York: McGraw-Hill, 1970).

5. For further details, see:

A. Langley, “In Search of Rationality: The Purposes behind the Use of Formal Analysis in Organizations,” Administrative Science Quarterly 34 (1989): 598 ff.

6. M.S. Feldman and J.G. March, “Information in Organizations as Signal and Symbol,” Administrative Science Quarterly 26 (1981): 171 ff.;

J. Pfeffer, Managing with Power (Boston, Massachusetts: Harvard Business School Press, 1992);

T.H. Davenport, R.G. Eccles, and L. Prusak, “Information Politics,” Sloan Management Review, Fall 1992, pp. 53 ff.; and

M. Feldman, Order without Design: Information Production and Policy Making (Stanford, California: Stanford University Press, 1989).

7. My hypotheses on the roles of participation and power are supported by classic literature that relates organizational size with formalization. Smaller organizations are more centralized than larger ones, and decision making therefore tends to be very informal. See:

D. Pugh, D. Hickson, and C. Hinings, “An Empirical Taxonomy of Structures of Work Organizations,” Administrative Science Quarterly 14 (1969): 115 ff.

8. This statement is perhaps more controversial than previous ones. Some authors have argued that formal analysis is more applicable to situations where there is little uncertainty or conflict. See, for example: J.W. Dean and M.P. Sharfman, “Procedural Rationality in the Strategic Decision-Making Process,” Journal of Management Studies 30 (1993): 587 ff.

My data suggest that conflict and uncertainty cause people to generate more analysis, although this analysis may sometimes have limited effect on decisions. See:

J.G. March and H.A. Simon, Organizations (New York: Wiley, 1958).

9. See K. Eisenhardt, “Making Fast Strategic Decisions in High-Velocity Environments,” Academy of Management Journal 32 (1989): 543 ff.

10. D. Robey and W. Taggart, “Measuring Managers’ Minds: The Assessment of Style in Human Information Processing,” Academy of Management Review 6 (1981): 375 ff.

11. See, for example, Nisbett and Ross (1980).

12. J.L. Bower, Managing the Resource Allocation Process (Homewood, Illinois: Irwin, 1970);

R. Burgelman, “A Process Model of Internal Corporate Venturing in the Diversified Major Firm,” Administrative Science Quarterly 28 (1983): 223 ff.; and

J.W. Dean, Deciding to Innovate (Cambridge, Massachusetts: Ballinger, 1987).

13. For an interesting view on the strategic role of middle management, see:

S.W. Floyd and B. Wooldridge, “Dinosaurs or Dynamos? Recognizing Middle Management’s Strategic Role,” The Executive 8 (1994): 47 ff.

14. See, for example, A. Taylor III, “Can GM Remodel Itself?” Fortune, 13 January 1992, pp. 26 ff.;

K. Kerwin, J.B. Treece, and D. Woodruff, “Can Jack Smith Fix GM?” Business Week, 1 November 1993, pp. 126 ff.;

J.W. Verity, T. Peterson, D. Depke, and E.I. Schwartz, “The New IBM,” Business Week, 16 December 1991, pp. 112 ff.; and

T.J. Peters, “Prometheus Barely Unbound,” The Executive 4 (1990): 70 ff.

15. B. Dumaine, “The Bureaucracy Busters,” Fortune, 17 June 1991, pp. 36 ff.; and

R.H. Howard, “The CEO as Organizational Architect: An Interview with Xerox’s Paul Allaire,” Harvard Business Review, September–October 1992, pp. 106 ff.

16. L.B. Barnes and M.P. Kriger, “The Hidden Side of Organizational Leadership,” Sloan Management Review, Fall 1986, pp. 15 ff.; Eisenhardt (1989); and

D.C. Wilson, “Electricity and Resistance: A Case Study of Innovation and Politics,” Organization Studies 3 (1982): 119 ff.

17. For similar remarks, see:

E. Jaques, “In Praise of Hierarchy,” Harvard Business Review, January–February 1990, pp. 127 ff.; and

L. Hirschhorn and T. Gilmore, “The New Boundaries of the ‘Boundaryless’ Company,” Harvard Business Review, May–June 1992, pp. 104 ff.

19. A. Taylor III, “A U.S.-Style Shake-up at Honda,” Fortune, 30 December 1991, pp. 115 ff.

20. R.L. Daft, R.H. Lengel, and L.K. Trevino, “Message Equivocality, Media Selection, and Manager Performance: Implications for Information Systems,” MIS Quarterly 11 (1987): 355 ff.

21. I.I. Mitroff, J.R. Emshoff, and R.H. Kilmann, “Assumptional Analysis: A Methodology for Strategic Problem Solving,” Management Science 25 (1979): 583 ff.

For a somewhat different methodology, see:

P. Checkland and J. Scholes, Soft Systems Methodology in Action (Chichester, England: Wiley, 1990).

22. See B.M. Bass, Stogdill’s Handbook of Leadership (New York: Free Press, 1981).

23. See Eisenhardt (1989).

24. Barnes and Kriger (1986).

25. The term “decision vacuum” was inspired by R.G. Corwin and K.S. Louis, “Organizational Barriers to the Utilization of Research,” Administrative Science Quarterly 27 (1982): 623 ff. These authors describe how public sector evaluation research is often ignored because it falls into a “policy vacuum” where there is no clearly identifiable sponsor who might have any need for it.

26. K.H. Hammonds and G. DeGeorge, “Where Did They Go Wrong?” Business Week (special issue on quality management), 1991, pp. 34 ff.

27. J. Hudiberg, “Quality Fever at Florida Power,” inset in J. Main, “Is the Baldrige Overblown?” Fortune, 1 July 1991, pp. 62 ff.

28. D. Greising, “Selling a Bright Idea — Along with the Kilowatts,” Business Week, 8 August 1994, p. 59.

29. I.L. Janis, Victims of Groupthink (Boston, Massachusetts: Houghton Mifflin, 1972); and

G. Whyte, “Groupthink Reconsidered,” Academy of Management Review 14 (1989): 40 ff.

30. J. Huey, “Secrets of Great Second Bananas,” Fortune, 6 May 1991, pp. 64 ff.

31. J.M. Howell and B.J. Avolio, “The Ethics of Charismatic Leadership: Submission or Liberation?” The Executive 6 (1992): 43 ff.;

D. Miller, The Icarus Paradox (New York: HarperBusiness, 1990); and

J.A. Byrne, C. Symonds, and J. Flynn, “CEO Disease,” Business Week, 1 April 1991, pp. 52 ff.

32. A.O. Hirschman, Exit, Voice, and Loyalty (Cambridge, Massachusetts: Harvard University Press, 1970).

33. There has been considerable debate in the management literature on the relative merits of the devil’s advocate and dialectical inquiry approaches to decision making. Cosier and Schwenk represent a reconciliation (compromise or synergistic solution) by opposing protagonists of the two different methods. See:

R.A. Cosier and C.R. Schwenk, “Agreement and Thinking Alike: Ingredients for Poor Decisions,” The Executive 4 (1990): 69 ff. For a scientific comparison of dialectics, devil’s advocacy, and consensus procedures for decision making, see:

D.M. Schweiger, W.R. Sandberg, and J.W. Ragan, “Group Approaches for Improving Strategic Decision Making: A Comparative Analysis of Dialectical Inquiry, Devil’s Advocacy, and Consensus,” Academy of Management Journal 29 (1986): 51 ff.

34. Mitroff et al. (1979).

35. J.A. Pearce III and S.A. Zahra, “The Relative Power of CEOs and Boards of Directors: Associations with Corporate Performance,” Strategic Management Journal 12 (1991): 135 ff.

36. For an accessible description of agency theory and its relevance to organization theory, see:

K. Eisenhardt, “Agency Theory: An Assessment and Review,” Academy of Management Review 14 (1989): 57 ff.

38. Much of the discussion in this section is inspired by Allaire and Firsirotu. See:

Y. Allaire and M. Firsirotu, “Strategic Plans as Contracts,” Long Range Planning 23 (1991): 102 ff.

39. H.S. Geneen, Managing (Garden City, New York: Doubleday, 1984).

40. See Peters and Waterman (1982) and Allaire and Firsirotu (1991).

41. Verity et al. (1991).

42. Y. Allaire and M. Firsirotu, “How to Implement Radical Strategies in Large Organizations,” Sloan Management Review, Spring 1985, pp. 19 ff.;

G. Donaldson and J.W. Lorsch, Decision Making at the Top (New York: Basic Books, 1983); and

J. Huey, “Nothing Is Impossible,” Fortune, 23 September 1991, pp. 134 ff.


Key Points

Decision Matrix Analysis helps you to decide between several options, where you need to take many different factors into account.

To use the tool, lay out your options as rows on a table. Set up the columns to show the factors you need to consider. Score each choice for each factor using numbers from 0 (poor) to 5 (very good), and then allocate weights to show the importance of each of these factors.

Multiply each score by the weight of the factor, to show its contribution to the overall selection. Finally add up the total scores for each option. The highest scoring option will be the best option.
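The scoring procedure above can be sketched in a few lines of Python. The options, factors, scores, and weights here are invented for illustration, not taken from the article:

```python
# A minimal sketch of Decision Matrix Analysis: score each option on each
# factor (0 = poor, 5 = very good), weight the factors by importance,
# and pick the option with the highest weighted total.

options = {
    "Supplier A": [3, 4, 2],
    "Supplier B": [5, 2, 4],
    "Supplier C": [4, 3, 3],
}
factors = ["cost", "quality", "delivery time"]
weights = [4, 5, 2]  # importance of each factor (illustrative)

def weighted_total(scores, weights):
    """Multiply each score by its factor weight and sum the contributions."""
    return sum(s * w for s, w in zip(scores, weights))

totals = {name: weighted_total(scores, weights) for name, scores in options.items()}
best = max(totals, key=totals.get)

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total}")
print("Best option:", best)  # → Supplier B (total 38)
```

With these made-up numbers, Supplier B wins (5·4 + 2·5 + 4·2 = 38) even though it scores worst on quality, because cost carries a heavy weight; changing the weights can change the winner, which is the point of the exercise.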

Decision Matrix Analysis is the simplest form of Multiple Criteria Decision Analysis (MCDA), also known as Multiple Criteria Decision Aid or Multiple Criteria Decision Management (MCDM). Sophisticated MCDA can involve highly complex modeling of different potential scenarios, using advanced mathematics.

A lot of business decision making, however, is based on approximate or subjective data. Where this is the case, Decision Matrix Analysis may be all that’s needed.



An Overview Of Decision Making Models

The many decision making models that exist nowadays mean that you even have to make a decision as to which one to use! There are rational models, intuitive models, and rational-iterative models, as well as 5, 6, 7, and even 9 step decision models.

Most, however, move through each of the basic stages in decision making.

On this page we will quickly scan over the main points of some of these decision models so that you have a sense of what's available.

Some of these decision making models presuppose that decision making is the same as problem solving. Frequently, the first step in the decision making process is to identify the problem. I don't believe that every decision is solving a problem, however. For example, deciding whether you want dark chocolate or milk chocolate is not, in and of itself, a problem.

I also understand that for some people decision making can be a problem! But that does not mean that they are the same thing. So my descriptions and ideas below keep these things separate.

Rational decision making models

Decision matrix analysis, Pugh matrix, SWOT analysis, Pareto analysis and decision trees are examples of rational models and you can read more about the most popular here.

This type of model is based around a cognitive judgement of the pros and cons of various options. It is organized around selecting the most logical and sensible alternative that will have the desired effect. Detailed analysis of alternatives and a comparative assessment of the advantages of each is the order of the day.

Rational decision models can be quite time consuming and often require a lot of preparation in terms of information gathering. The six step decision making process is a classic example in this category and you can read about the 9 step model here.

The Vroom-Jago decision model helps leaders decide how much involvement their teams and subordinates should have in the decision making process.

Seven step decision making model

The seven step model was designed for choosing careers and may be classed as a rational decision making model. The seven steps are designed to firstly identify the frame of the decision. Based on the information available, alternatives are generated. Further information is then gathered about these alternatives in order to choose the best one.

Have a favorite Model?

But what happens when there's too much information? How do you separate the useful from the worthless? And then, of course, the world is changing so rapidly that the information is also changing rapidly. But waiting for things to stabilize may cause a delay in decision making which may, in turn, lead to missed opportunities.

Many think the way forward involves reharnessing the power of our intuition.

Intuitive decision making models

Some people consider intuitive decisions to be unlikely coincidences, lucky guesses, or some kind of new-age hocus-pocus. Many universities are still teaching only rational decision models and suggest that failure results if these are not used. Some researchers are even studying the logic behind the intuitive decision making models!

The groups who study intuitive decision models are realizing that it's not simply the opposite of rational decision making.

In military schools, the rational, analytical models have historically been utilized. It has also long been recognized, however, that once the enemy is engaged, the analytical model may do more harm than good. History is full of examples where battles have more often been lost by a leader's failure to make a decision than by his making a poor one.

"A good plan, executed now, is better than a perfect plan next week."

The military is educating soldiers of every rank in how to make intuitive decisions. Information overload, lack of time, and chaotic conditions are poor conditions for rational models. Instead of trying to improve soldiers' rational decision making, the army has turned to intuitive decision models. Why? Because they work!

Recognition primed decision making model

In this model, decision makers recognize the situation from past experience, pick a plausible course of action, and mentally rehearse it. If they don't think it will work, they choose another, and mentally rehearse that. As soon as they find one that they think will work, they do it. Again, past experience and learning play a big part here.

There is no actual comparison of choices, but rather a cycling through choices until an appropriate one is found.

Obviously people become better with this over time as they have more experiences and learn more patterns.
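This "cycle until one works" logic can be sketched in a few lines. The function name and the `simulate` predicate standing in for mental rehearsal are hypothetical, purely for illustration:

```python
# Illustrative sketch of the recognition-primed decision (RPD) loop:
# candidate actions are considered one at a time, in the order experience
# suggests them, and the first one that survives mental rehearsal is
# acted on. No side-by-side comparison of all options takes place.

def rpd_decide(candidates, simulate):
    """Return the first candidate whose mental simulation succeeds.

    `candidates` is ordered by recognition (most typical action first);
    `simulate` is a predicate standing in for mental rehearsal.
    """
    for action in candidates:
        if simulate(action):   # "will this plausibly work?"
            return action
    return None  # no workable option recognised

# Toy usage with hypothetical actions: the first idea fails rehearsal,
# so the second is chosen without ever evaluating the third.
actions = ["ventilate roof", "interior attack", "defensive exterior"]
workable = lambda a: a != "ventilate roof"
print(rpd_decide(actions, workable))  # → interior attack
```

Note the contrast with the decision matrix earlier on this page: there, every option is scored and compared; here, evaluation stops at the first satisfactory option, which is exactly why the approach is fast under time pressure.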

For more examples of decision models, you can read a long list here.


A Comparative Analysis of Neural Networks and Statistical Methods for Predicting Consumer Choice

This paper presents a definitive description of neural network methodology and provides an evaluation of its advantages and disadvantages relative to statistical procedures. The development of this rich class of models was inspired by the neural architecture of the human brain. These models mathematically emulate the neurophysical structure and decision making of the human brain, and, from a statistical perspective, are closely related to generalized linear models. Artificial neural networks are, however, nonlinear and use a different estimation procedure (feed forward and back propagation) than is used in traditional statistical models (least squares or maximum likelihood). Additionally, neural network models do not require the same restrictive assumptions about the relationship between the independent variables and dependent variable(s). Consequently, these models have already been very successfully applied in many diverse disciplines, including biology, psychology, statistics, mathematics, business, insurance, and computer science.

We propose that neural networks will prove to be a valuable tool for marketers concerned with predicting consumer choice. We will demonstrate that neural networks provide superior predictions regarding consumer decision processes. In the context of modeling consumer judgment and decision making, for example, neural network models can offer significant improvement over traditional statistical methods because of their ability to capture nonlinear relationships associated with the use of noncompensatory decision rules. Our analysis reveals that neural networks have great potential for improving model predictions in nonlinear decision contexts without sacrificing performance in linear decision contexts.

This paper provides a detailed introduction to neural networks that is understandable to both the academic researcher and the practitioner. This exposition is intended to provide both the intuition and the rigorous mathematical models needed for successful applications. In particular, a step-by-step outline of how to use the models is provided along with a discussion of the strengths and weaknesses of the model. We also address the robustness of the neural network models and discuss how far wrong you might go using neural network models versus traditional statistical methods.

Herein we report the results of two studies. The first is a numerical simulation comparing the ability of neural networks with discriminant analysis and logistic regression at predicting choices made by decision rules that vary in complexity. This includes simulations involving two noncompensatory decision rules and one compensatory decision rule that involves attribute thresholds. In particular, we test a variant of the satisficing rule used by Johnson et al. (Johnson, Eric J., Robert J. Meyer, Sanjoy Ghose. 1989. When choice models fail: Compensatory models in negatively correlated environments. J. Marketing Res.26(August) 255–270.) that sets a lower bound threshold on all attribute values and a “latitude of acceptance” model that sets both a lower threshold and an upper threshold on attribute values, mimicking an “ideal point” model (Coombs and Avrunin [Coombs, Clyde H., George S. Avrunin. 1977. Single peaked functions and the theory of preference. Psych. Rev.84 216–230.]). We also test a compensatory rule that equally weights attributes and judges the acceptability of an alternative based on the sum of its attribute values. Thus, the simulations include both a linear environment, in which traditional statistical models might be deemed appropriate, as well as a nonlinear environment where statistical models might not be appropriate. The complexity of the decision rules was varied to test for any potential degradation in model performance. For these simulated data it is shown that, in general, the neural network model outperforms the commonly used statistical procedures in terms of explained variance and out-of-sample predictive accuracy.
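As a rough illustration of the three decision rules described above, they can be written out as simple predicates over attribute values in [0, 1]. The thresholds and weights here are assumptions for the sketch, not the parameters used in the paper's simulations:

```python
import random

# Sketch of the three choice rules: two noncompensatory rules with
# attribute thresholds, and one compensatory equal-weight rule.
# Thresholds below are illustrative assumptions.

LOWER, UPPER, SUM_CUT = 0.3, 0.8, 1.5

def satisficing(attrs, lower=LOWER):
    """Noncompensatory: accept only if every attribute clears a lower bound."""
    return all(a >= lower for a in attrs)

def latitude_of_acceptance(attrs, lower=LOWER, upper=UPPER):
    """Noncompensatory 'ideal point': every attribute must fall in a band."""
    return all(lower <= a <= upper for a in attrs)

def compensatory(attrs, cut=SUM_CUT):
    """Linear rule: equal weights; accept if the attribute sum clears a cut.
    A strength on one attribute can compensate for a weakness on another."""
    return sum(attrs) >= cut

# Apply each rule to the same random alternatives to see how often it accepts.
random.seed(0)
alternatives = [[random.random() for _ in range(3)] for _ in range(1000)]
for rule in (satisficing, latitude_of_acceptance, compensatory):
    print(rule.__name__, sum(rule(a) for a in alternatives))
```

The noncompensatory rules are what make the classification boundary nonlinear: an alternative with attributes (0.9, 0.9, 0.2) has a high sum but fails the satisficing rule, which is the kind of pattern a linear model struggles to capture.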

An empirical study bridging the behavioral and statistical lines of research was also conducted. Here we examine the predictive relationship between retail store image variables and consumer patronage behavior. A direct comparison between a neural network model and the more commonly encountered techniques of discriminant analysis and factor analysis followed by logistic regression is presented. Again the results reveal that the neural network model outperformed the statistical procedures in terms of explained variance and out-of-sample predictive accuracy. We conclude that neural network models offer superior predictive capabilities over traditional statistical methods in predicting consumer choice in nonlinear and linear settings.


How does automation change crime control?

The automation of policing

With CompStat (COMPuter STATistics, or COMParative STATistics), geospatial modelling for predicting future crime concentrations, or ‘hot spots’, Footnote 3 has developed into a paradigm of managerial policing that employs Geographic Information Systems (GIS) to map crime. This has been advocated as a multi-layered dynamic approach to crime reduction, quality-of-life improvement, and personnel and resource management, and not merely a computer programme. The idea is not solely to ‘see crime’ visually presented on a map, but rather to develop a comprehensive managerial approach or a police management philosophy. As a ‘human resource management tool’, it involves ‘weekly meetings where officers review recent metrics (crime reports, citations, and other data) and talk about how to improve those numbers.’ Footnote 4

Compared to algorithmic prediction software, the CompStat system is calibrated less frequently. As a police officer from Santa Cruz (USA) reported: ‘I’m looking at a map from last week and the whole assumption is that next week is like last week […]’. Footnote 5 CompStat relies more on humans to recognise patterns. Nevertheless, it incorporated for the first time the idea of seeing how crime evolves and of focusing on ‘the surface’ rather than the causes of crime. In this context, Siegel argues with respect to predictive analytics: ‘We usually don’t know about causation, and we often don’t necessarily care […] the objective is more to predict than it is to understand the world […]. It just needs to work; prediction trumps explanation.’ Footnote 6 In comparison to AI analytics, its limiting factor is the depth of the information and the related breadth of analysis. The amount of data is not the problem, as agencies collect vast amounts of data every day; rather, the next challenge is the ability to pull operationally relevant knowledge from the data collected.
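The core of hot-spot mapping of this kind is simply aggregating incident locations into grid cells and ranking the cells. A minimal sketch, with invented coordinates and an assumed cell size:

```python
from collections import Counter

# Minimal hot-spot sketch: bin incident coordinates into grid cells and
# rank cells by incident count. Cell size and coordinates are invented
# for illustration.

CELL = 0.01  # grid cell size in degrees (roughly 1 km; an assumption)

def cell_of(lat, lon, cell=CELL):
    """Snap a coordinate to the integer index of its grid cell."""
    return (int(lat // cell), int(lon // cell))

def hot_spots(incidents, top=3, cell=CELL):
    """Return the `top` grid cells with the most incidents."""
    counts = Counter(cell_of(lat, lon, cell) for lat, lon in incidents)
    return counts.most_common(top)

# Toy incident log: most events cluster in one location.
incidents = [(40.712, -74.006)] * 5 + [(40.730, -73.990)] * 2 + [(40.650, -74.100)]
print(hot_spots(incidents, top=2))
```

A real GIS layer does far more (street geometry, time windows, kernel density smoothing), but the managerial output CompStat meetings revolve around is essentially this ranked list of cells.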

Computational methods of ‘predictive crime mapping’ started to enter into crime control twelve years ago. Footnote 7 Predictive ‘big data’ policing instruments took another evolutionary step forward. First, advancements in AI promised to make sense of enormous amounts of data and to extract meaning from scattered data sets. Secondly, they represented a shift from being a decision support system to being a primary decision-maker. Thirdly, they are aimed at the regulation of society at large and not only the fight against crime. (For an example of ‘function-creep’, see Singapore’s ‘total information awareness system programme’.) Footnote 8

Police are using AI tools to penetrate deeply into the preparatory phase of crime which is yet to be committed, as well as to scrutinise already-committed crimes. With regard to ex-ante preventive measures, automation tools are supposed to excavate plotters of crimes which are yet to be committed from large amounts of data. Hence, a distinction is made between tools focusing on ‘risky’ individuals (‘heat lists’—algorithm-generated lists identifying people most likely to commit a crime) Footnote 9 and tools focusing on risky places (‘hot spot policing’). Footnote 10 With regard to the second, ex-post-facto uses of automation tools, there have been many success stories in the fight against human trafficking. In Europe, Interpol manages the International Child Sexual Exploitation Image Database (ICSE DB) to fight child sexual abuse. The database can facilitate the identification of victims and perpetrators through an analysis of, for instance, furniture and other mundane items in the background of abusive images—e.g., it matches carpets, curtains, furniture, and room accessories—or identifiable background noise in the video. Chatbots acting as real people are another advancement in the fight against grooming and webcam ‘sex tourism’. In Europe, the Dutch children’s rights organisation Terre des Hommes was the first NGO to combat webcam child ‘sex tourism’ by using a virtual character called ‘Sweetie’. Footnote 11 The Sweetie avatar, posing as a ten-year-old Filipino girl, was used to identify offenders in chatrooms and online forums and operated by an agent of the organisation, whose goal was to gather information on individuals who contacted Sweetie and solicited webcam sex. Moreover, Terre des Hommes started engineering an AI system capable of depicting and acting as Sweetie without human intervention in order to not only identify persistent perpetrators but also to deter first-time offenders.

Some other research on preventing crime with the help of computer vision and pattern recognition with supervised machine learning seems outright dangerous. Footnote 12 Research on automated inference of criminality from still facial images of 1,856 real persons (half of them convicted) yielded the result that there are merely three features for predicting criminality: lip curvature, eye inner corner distance, and nose-mouth angle. The implicit assumptions of the researchers were, first, that the appearance of a person’s face is a function of innate properties, i.e., the understanding that people have an immutable core; secondly, that ‘criminality’ is an innate property of certain (groups of) people, which can be identified merely by analysing their faces; and thirdly, in the event of the first two assumptions being correct, that the criminal justice system is actually able to reliably determine such ‘criminality’, which implies that courts are (or perhaps should become) laboratories for the precise measurement of people’s faces. The software promising to infer criminality from facial images Footnote 13 in fact illuminated some of the deep-rooted misconceptions about what crime is, and how it is defined, prosecuted, and adjudicated. The once ridiculed phrenology of the nineteenth century hence entered the twenty-first century in new clothes as ‘algorithmic phrenology’, which can legitimise deep-rooted implicit biases about crime. Footnote 14 The two researchers, Wu and Zhang, later admitted that they ‘agree that the pungent word criminality should be put in quotation marks; a caveat about the possible biases in the input data should be issued. Taking a court conviction at its face value, i.e., as the “ground truth” for machine learning, was indeed a serious oversight on our part.’ Footnote 15 Nevertheless, their research revealed how, in the near future, further steps along the line of a corporal focus on crime control can reasonably be expected: from the analysis of walking patterns, posture, and facial recognition for identification purposes, to the analysis of facial expressions and handwriting patterns for emotion recognition and insight into psychological states.

Automation in criminal courts

Courts use AI systems to assess the likelihood of recidivism and the likelihood of flight of those awaiting trial, or of offenders in bail and parole procedures. The most analysed and discussed examples come from the USA, which is also where most such software is currently being used. Footnote 16 The Arnold Foundation algorithm, which is being rolled out in 21 jurisdictions in the USA, Footnote 17 uses 1.5 million criminal cases to predict defendants’ behaviour in the pre-trial phase. Florida uses machine learning algorithms to set bail amounts. Footnote 18

A study of 1.36 million pre-trial detention cases showed that a computer could predict whether a suspect would flee or re-offend even better than a human judge. Footnote 19 However, while these results seem persuasive, it is important to consider that the decisions may in fact be less just. There will always be additional facts in a particular case, perhaps unique ones that go beyond the forty or so parameters considered by the algorithm in this study, which might crucially determine the outcome of the deliberation process. There is thus an inevitable need for endless refinement. Moreover, the problem of selective labelling needs to be considered: we see results only for the sub-groups that are analysed, i.e., only for people who have been released. The data that we see is generated by our own decisions about who to send to pre-trial detention. The researchers themselves pointed out that judges may have a broader set of preferences than the variables that the algorithm focuses on. Footnote 20 Finally, there is the question of what we want to achieve with AI systems, what we would like to ‘optimise’: decreasing crime is an important goal, but not the only goal in criminal justice. The fairness of the procedure is equally significant.
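The selective-labelling point can be made concrete with a toy simulation (all numbers invented): re-offence outcomes are observed only for defendants a judge chose to release, so a rate computed on that subset systematically differs from the population rate:

```python
import random

# Toy illustration of the selective-labels problem. Each defendant has a
# latent risk; the judge detains anyone with risk >= 0.5, so outcomes are
# observed only for the lower-risk released group. A re-offence rate
# measured on released defendants then understates the population rate.

random.seed(42)

def defendant():
    risk = random.random()                 # latent risk in [0, 1]
    reoffends = random.random() < risk     # outcome drawn from that risk
    return risk, reoffends

population = [defendant() for _ in range(10_000)]
released = [(r, y) for r, y in population if r < 0.5]  # judge's selection

rate = lambda group: sum(y for _, y in group) / len(group)
print(f"re-offence rate, released only: {rate(released):.2f}")
print(f"re-offence rate, everyone:      {rate(population):.2f}")
```

The gap between the two printed rates is entirely an artefact of who was selected for release; any model trained or evaluated only on the released subset inherits this distortion.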

Several European countries are using automated decision-making systems for justice administration, especially for the allocation of cases to judges, e.g., in Georgia, Poland, Serbia, and Slovakia, and to other public officials, such as enforcement officers in Serbia. Footnote 21 However, while these cases are examples of indirect automated decision-making systems, they may still significantly affect the right to a fair trial. The study ‘alGOVrithms—State of Play’ showed that none of the four countries using automated decision-making systems for case-allocation allows access to the algorithm and/or the source code. Footnote 22 Independent monitoring and auditing of automated decision-making systems is not possible, as the systems lack basic transparency. The main concern touches on how random these systems actually are, and whether they allow tinkering and can therefore be ‘fooled’. What is even more worrying is that automated decision-making systems used for court management purposes are not transparent even for the judges themselves. Footnote 23

There are several other ongoing developments touching upon courtroom decision-making. In Estonia, the Ministry of Justice is financing a team to design a robot judge that could adjudicate small-claims disputes of less than €7,000. [24] In concept, the two parties will upload documents and other relevant information, and the AI will issue a decision against which an appeal to a human judge may be lodged.

Automation in prisons

New tools are used in various ways in the post-conviction stage. In prisons, AI is increasingly being used for the automation of security as well as for the rehabilitative aspect of prisonisation. A prison that houses some of China's most high-profile criminals is reportedly installing an AI network that will be able to recognise and track every prisoner around the clock and alert guards if anything seems out of place. [25]

These systems are also used to ascertain the criminogenic needs of offenders that can be changed through treatment, and to monitor interventions in sentencing procedures. [26] In Finnish prisons, inmates' work includes training AI algorithms. [27] The inmates classify and answer simple questions in user studies, e.g., reviewing pieces of content collected from social media and from around the internet. The work is supposed to benefit Vainu, the company organising the prison work, while also providing prisoners with new job-related skills that could help them successfully re-enter society after serving their sentences. Similarly, in England and Wales, the government has announced new funding for prisoners to be trained in coding, part of a £1.2m package to help under-represented groups get into such work. [28] Some scholars are even discussing the possibility of using AI to address the solitary-confinement crisis in the USA by employing smart assistants, similar to Amazon's Alexa, as 'confinement companions' for prisoners. While such 'companions' may alleviate some of the psychological stress for some prisoners, this focus on the surface of the problem conceals the debate about the aggravating harm of solitary confinement [29] and actually contributes to the legitimisation of solitary confinement as penal policy. The shift away from the real problem seems outrageous in its own right.


Qualitative Comparative Analysis

Qualitative Comparative Analysis (QCA) is a means of analysing the causal contribution of different conditions (e.g., aspects of an intervention and the wider context) to an outcome of interest. QCA starts with documentation of the different configurations of conditions associated with each case of an observed outcome. These are then subjected to a minimisation procedure that identifies the simplest set of conditions that can account for all the observed outcomes, as well as their absence.

The results are typically expressed as statements in ordinary language or in Boolean notation. For example:

  • A combination of condition A and condition B, or a combination of condition C and condition D, will lead to outcome E.
  • In Boolean notation this is expressed more succinctly as A*B + C*D → E

QCA results are able to distinguish various complex forms of causation, including:

  • Configurations of causal conditions, not just single causes. In the example above, there are two different causal configurations, each made up of two conditions.
  • Equifinality, where there is more than one way in which an outcome can happen. In the above example, each configuration represents a different causal pathway.
  • Causal conditions which are necessary, sufficient, both or neither, plus more complex combinations (known as INUS causes: insufficient but necessary parts of a configuration that is itself unnecessary but sufficient), which tend to be more common in everyday life. In the example above, no single condition is sufficient or necessary on its own, but each condition is an INUS-type cause.
  • Asymmetric causes, where the causes of failure may not simply be the absence of the causes of success. In the example above, the configuration associated with the absence of E might have been A*B*X + C*D*X → e. Here, X was a sufficient and necessary blocking condition.
  • The relative influence of different individual conditions and causal configurations in a set of cases being examined. In the example above, the first configuration may have been associated with 10 cases where the outcome was E, whereas the second might have been associated with only 5 cases. Configurations can be evaluated in terms of coverage (the percentage of cases they explain) and consistency (the extent to which a configuration is always associated with a given outcome).
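As a minimal sketch (with invented case data), the crisp-set solution A*B + C*D → E from the example above can be checked against coded cases, including a simple consistency check, in a few lines of Python:

```python
# Minimal sketch with invented data: checking cases against the crisp-set
# solution A*B + C*D -> E, where 1 = condition present, 0 = absent.

def solution_predicts(case):
    """True if the case satisfies A*B + C*D (the OR of two AND-configurations)."""
    return bool((case["A"] and case["B"]) or (case["C"] and case["D"]))

cases = [
    {"A": 1, "B": 1, "C": 0, "D": 0, "E": 1},  # covered by A*B
    {"A": 0, "B": 1, "C": 1, "D": 1, "E": 1},  # covered by C*D
    {"A": 1, "B": 0, "C": 0, "D": 1, "E": 0},  # matches neither configuration
]

# Consistency in this toy sense: every case the solution covers shows outcome E.
consistent = all(case["E"] == 1 for case in cases if solution_predicts(case))
print(consistent)  # -> True
```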

QCA is able to use relatively small and simple data sets. There is no requirement to have enough cases to achieve statistical significance, although ideally there should be enough cases to potentially exhibit all the possible configurations; how many that is depends on the number of conditions present. In a recent survey of QCA uses, the median number of cases was 22 and the median number of conditions was 6. For each case, the presence or absence of a condition is recorded as nominal data, i.e., a 1 or a 0. More sophisticated forms of QCA allow the use of "fuzzy sets", where a condition may be partly present or partly absent (represented by a value of 0.8 or 0.2, for example), or where there is more than one kind of presence (represented by values of 0, 1, 2 or more). Data for a QCA analysis is collated in a simple matrix, where rows = cases and columns = conditions, with the rightmost column listing the associated outcome for each case, also described in binary form.
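The matrix layout just described can be sketched with hypothetical binary data: grouping cases into truth-table rows and flagging configurations with mixed outcomes (the conditions A, B, C and the counts are invented for illustration):

```python
# Hypothetical sketch: collating a binary QCA data matrix (rows = cases,
# columns = conditions, last column = outcome) into a truth table that
# records, for each configuration, how many cases had each outcome.
from collections import defaultdict

data_matrix = [
    # A, B, C, outcome
    (1, 1, 0, 1),
    (1, 1, 0, 1),
    (0, 1, 1, 0),
    (0, 1, 1, 1),  # same configuration as the row above, different outcome
]

truth_table = defaultdict(lambda: {"outcome_1": 0, "outcome_0": 0})
for *config, outcome in data_matrix:
    truth_table[tuple(config)]["outcome_1" if outcome else "outcome_0"] += 1

for config, counts in truth_table.items():
    mixed = counts["outcome_1"] > 0 and counts["outcome_0"] > 0
    print(config, counts, "mixed" if mixed else "consistent")
```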

QCA is a theory-driven approach, in that the choice of conditions being examined needs to be driven by a prior theory about what matters. The list of conditions may be revised in the light of the results of the QCA analysis if some configurations are still shown as being associated with a mixture of outcomes. Coding the presence or absence of a condition also requires an explicit view of that condition and of when and where it can be considered present. Dichotomisation of quantitative measures of the incidence of a condition likewise needs to be carried out with an explicit rationale, not on an arbitrary basis.

Although QCA was originally developed by Charles Ragin some decades ago, it is only in the last decade that its use has become more common amongst evaluators. Articles on its use have appeared in Evaluation and the American Journal of Evaluation, and there is now a website dedicated to hosting resources on its use: Compass.

Example

For a worked example, see Charles Ragin's What is Qualitative Comparative Analysis (QCA)?, slides 6 to 15, on the bare-bones basics of crisp-set QCA.

[A crude summary of the example is presented here]

In his presentation, Ragin provides data on 65 countries and their reactions to austerity measures imposed by the IMF. This has been condensed into a Truth Table (shown below), which shows all possible configurations of the four conditions thought to affect countries' responses: the presence or absence of severe austerity, prior mobilisation, corrupt government, and rapid price rises. Next to each configuration is the outcome associated with it: the numbers of countries experiencing mass protest or not. There are 16 configurations in all, one per row. The rightmost column describes the consistency of each configuration: whether all cases with that configuration have the same outcome, or a mixed outcome (i.e., some protests and some no protests). Notice that there are also some configurations with no known cases.

Ragin's next step is to improve the consistency of the configurations with mixed outcomes. This is done either by rejecting cases within an inconsistent configuration as outliers (with exceptional circumstances unlikely to be repeated elsewhere) or by introducing an additional condition (column) which distinguishes between those configurations which did lead to protest and those which did not. In this example, a new condition, described as "not having a repressive regime", removed the inconsistency.

The next step involves reducing the number of configurations needed to explain all the outcomes, a process known as minimisation. Because this is time-consuming, it is done by an automated algorithm. The algorithm takes two configurations at a time and examines whether they have the same outcome. If so, and if the configurations differ in only one condition, that condition is deemed not to be an important causal factor and the two configurations are collapsed into one. This process of comparison continues, across all configurations including newly collapsed ones, until no further reductions are possible.
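This pairwise reduction is essentially the Quine-McCluskey procedure from Boolean logic. A minimal sketch, using 1/0 tuples for presence/absence and "-" for a condition that turns out not to matter, might look like this:

```python
# Sketch of the pairwise minimisation step: two configurations with the same
# outcome that differ in exactly one condition are collapsed, and that
# condition is marked "-" (not causally important).

def collapse(c1, c2):
    """Collapse two configurations if they differ in exactly one position."""
    diffs = [i for i, (a, b) in enumerate(zip(c1, c2)) if a != b]
    if len(diffs) != 1:
        return None
    merged = list(c1)
    merged[diffs[0]] = "-"
    return tuple(merged)

def minimise(configs):
    """Repeat pairwise collapsing until no further reduction is possible."""
    configs = set(configs)
    while True:
        merged, used = set(), set()
        for c1 in configs:
            for c2 in configs:
                m = collapse(c1, c2)
                if m is not None:
                    merged.add(m)
                    used.update({c1, c2})
        if not merged:
            return configs
        configs = merged | (configs - used)

# The two configurations below share the same outcome and differ only in the
# third condition, so it is collapsed and the first two conditions remain.
print(minimise({(1, 1, 0), (1, 1, 1)}))  # -> {(1, 1, '-')}
```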

[Jumping a few more specific steps] The final result from the minimisation of the above truth table is this configuration:

The expression indicates that IMF protest erupts when severe austerity (SA) is combined with either (1) rapid price increases (PR) or (2) the combination of prior mobilisation (PM), government corruption (GC) and a non-repressive regime (NR). In Boolean notation: SA*PR + SA*PM*GC*NR → protest.


For an artificial agent to assume a real social role and establish a meaningful relationship with a human, it would need to have a psychological, cultural, social and emotional profile. Current machine learning methods do not allow for such a development. Tomorrow's robots will be our humble assistants, nothing more.

We live in a time when robots clean our houses, drive our vehicles, disable bombs, provide prosthetic limbs, support surgical procedures, manufacture products, entertain, teach and surprise us. Just as smartphones and social media are offering a connectivity beyond anything we imagined, robots are beginning to offer physical capabilities and artificial intelligence (AI), cognitive abilities beyond our expectations. Together, these technologies could be harnessed to help solve important challenges, such as ageing societies, environmental threats and global conflict.

What will a day in our lives look like, in this not-so-distant future? Science fiction has explored these possibilities for centuries. Our lives will likely be longer: with synthetic organs to replace defective parts of our bodies, nanosized medical interventions allowing the precise targeting of diseases at the genetic level, and autonomous vehicles reducing traffic fatalities. Our jobs will change dramatically. Certain jobs will no longer exist and new jobs will emerge – in the development of robot service apps, for instance, that could run on available robot platforms in our homes. The way we are educated will also change radically – our senses and brains may be artificially enhanced, and our ability to reflect on new insights gained from the automated analysis of vast amounts of data will require a different treatment of information in schools.

But how will we relate to each other in a civilization that includes robots? In what way will we meet each other, have relationships and raise our children? To what extent will robots and humans merge?

Many of us wonder whether AI will become so intelligent and capable in human communication that the boundaries between human and artificial beings will blur. If it is possible to communicate in a natural way and build a meaningful interaction over time with an artificial agent, will there still be a divide in the relationships we have with people and technology? Also, once our human bodies and minds are enhanced with AI and robotics, what will it mean to be “human”?

Smart tricks

From an engineering perspective, these advanced capabilities are still very far away. A number of hurdles need to be overcome. For now, robots and computers are completely dependent on a power source – they require a lot of electricity, and this complicates integrating robotic elements with human organic tissue. Another hurdle is the intricacy of human communication. While a one-off natural language conversation in a specific context with a robot can feel realistic, engaging people verbally and non-verbally over many conversations and contexts is quite another matter.

For example, when you call an artificial lost-and-found agent at an airport, a satisfying conversation is possible because there are only a limited number of goals the caller has. However, in creating a more extended relationship, for example, with a robotic pet, a much more complicated model must be developed. The robot needs to have internal goals, an extensive memory that relates experiences to various contexts, and it needs to develop these capabilities over time.

Through smart “tricks”, a robot can seem more intelligent and capable than it is – by introducing random behaviours which make the robotic pet interesting for longer, for instance. Humans have the tendency to “make sense” of the robot’s behaviours in a human way (we do this with animals too). However, in order to sustain a meaningful relationship which deepens and evolves over time, an extensive artificial inner life will need to be created.

How machines learn

A major hurdle in creating this rich artificial inner life is the way machines learn. Machine learning is example-based: we feed the computer examples of the phenomenon we want it to understand – for instance, when people feel comfortable. To teach a machine to recognize this, data on people being comfortable is provided – images, videos, speech, heartbeat, social media entries, etc. When we feed videos to a computer, these are labelled with information on whether the people in them are comfortable or not – labelling that may be done by experts in psychology, or in the local culture.

The computer uses machine learning to “reason” from these labelled videos to identify important features that correlate with feeling comfortable. This could be the body pose of a person, the pitch of their voice, etc.

Once the machine has identified the features predicting “comfort”, the resulting algorithm can be trained and improved, using different sets of videos. Eventually, the algorithm is robust and a computer with a camera can recognize how people feel with high, if not 100 per cent, accuracy.
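As a toy illustration of this example-based pipeline (all data invented, and a deliberately simple nearest-centroid rule standing in for the far more complex models used in practice): each "video" is reduced to two numeric features, labelled comfortable (1) or not (0), and new examples are classified by which class they sit closer to.

```python
# Toy sketch of example-based learning: labelled examples -> a simple
# classifier. Features and values are invented for illustration.

def centroid(points):
    """Average each feature over a list of feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# Labelled training examples: (body-pose openness, voice pitch)
comfortable = [(0.8, 0.3), (0.9, 0.2), (0.7, 0.4)]
uncomfortable = [(0.2, 0.8), (0.3, 0.9), (0.1, 0.7)]

c_yes, c_no = centroid(comfortable), centroid(uncomfortable)

def predict(features):
    """Classify a new example by its nearer class centroid (1 = comfortable)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return 1 if dist(features, c_yes) < dist(features, c_no) else 0

print(predict((0.85, 0.25)))  # close to the "comfortable" examples -> 1
```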

Now that we understand roughly how machines learn, why is that a hurdle in creating a compelling inner life for an artificial agent to realize a seamless integration with humans?

Towards a complex synthetic profile

In order to develop an artificial agent that can have a sustained relationship, over a long period of time, with a person, we need the agent to have a compelling personality and behaviours, understand the person, the situation in which they are both in, and the history of their communication. More importantly, the agent would have to keep the communication going across a variety of topics and situations. It is possible to make a compelling agent, such as Amazon’s Alexa or Apple’s Siri, that you can speak to in natural language and have a meaningful interaction with, within the specific context of its use – set the alarm clock, make a note, turn down the heating, etc.

However, beyond that context of use, the communication quickly breaks down. The agent will find acceptable responses for a large variety of questions and comments, but will not be able to sustain an hour-long discussion about a complex issue. For instance, when parents discuss how to respond to their child not working hard at school, the conversation is very rich – they bring to it their understanding of the child, and their own personalities, emotions, history, socio-economic and cultural backgrounds, psychology, genetic make-up, behavioural habits and understanding of the world.

In order for an artificial agent to take on such a meaningful social role and develop a real relationship with a person, it would need to have a synthetic psychological, cultural, social and emotional profile. Also, the agent would need to learn over time how it “feels” and respond to situations in relation to this synthetic internal make-up.

This requires a fundamentally different approach from current machine learning. What is needed is an artificially intelligent system that develops much as the human brain does, and that can internalize the richness of human experiences. The intricate ways in which people communicate with each other and understand the world are unimaginably complex to synthesize. The envisioned and currently available models of AI are inspired by the human brain, or have elements of how the brain works, but are not yet plausible models of it.

We already see AI achieving amazing feats – like reading the entire internet, winning at Go, the ancient Chinese board game, or running a fully automated factory. However, just as the English physicist Stephen Hawking (1942-2018) said he had only scratched the surface of understanding the universe, we are still merely scratching the surface of understanding human intelligence.

It won’t happen tomorrow

Robots and artificially intelligent systems will be able to offer us unique abilities to support and enhance our decision-making, understanding of situations and ways to act. Robots will be able to contribute to or autonomously carry out labour. Perhaps robotics will be fully physically integrated in our human bodies once a number of challenges are overcome. Also, we will relate to artificial agents as we do to humans – by communicating with them in natural language, observing their behaviours and understanding their intentions. However, in order to sustain a meaningful relationship with conversations and rituals, which deepen and evolve over time in the rich context of everyday life, as is the case between people, an extensive artificial inner life will need to be created. As long as we replicate or surpass certain functions of human intelligence rather than the holistic whole of human intelligence placed in the rich context of our everyday lives, it is unlikely that artificial agents and people can be totally integrated.


Organizational Behavior and Human Decision Processes


Organizational Behavior and Human Decision Processes publishes fundamental research in organizational behavior, organizational psychology, and human cognition, judgment, and decision-making. The journal features articles that present original empirical research, theory development, meta-analysis, and methodological advancements relevant to the substantive domains served by the journal. Topics covered by the journal include perception, cognition, judgment, attitudes, emotion, well-being, motivation, choice, and performance. We are interested in articles that investigate these topics as they pertain to individuals, dyads, groups, and other social collectives. For each topic, we place a premium on articles that make fundamental and substantial contributions to understanding psychological processes relevant to human attitudes, cognitions, and behavior in organizations.

In order to be considered for publication in OBHDP, a manuscript has to do the following:

  1. Demonstrate an interesting behavioral/psychological phenomenon
  2. Make a significant theoretical and empirical contribution to the existing literature
  3. Identify and test the underlying psychological mechanism for the newly discovered behavioral/psychological phenomenon
  4. Have practical implications in organizational context

Benefits to authors
We also provide many author benefits, such as free PDFs, a liberal copyright policy, and special discounts on Elsevier publications.

Please see our Guide for Authors for information on article submission.


The U.S. Army Is Getting Ready for the Robot Wars of the Future

These robots were designed to technical standards intended to enable rapid integration of any sensor or upgrade, so that they can receive software updates and new weapons as new threats emerge over time.

Robots fired machine guns, grenade launchers and anti-tank missiles at a range of targets during a Robotic Combat Vehicle live-fire warfare preparation exercise, a key step toward integrating a new fleet of armed ground drones to support future Army combat operations. The addition of these kinds of robots, with humans of course maintaining decision-making authority regarding the use of lethal force, continues to greatly reshape Army tactics, maneuver formations and cross-domain combat operations.

“We actually shot live bullets off of these robots. It's really exciting what we've proven out thus far, not only are the robots working extremely well, the payloads are accurate and effective,” Maj. Gen. Ross Coffman, the director of the Next Generation Cross-Functional Team for Army Futures Command, told the National Interest. The live-fire exercises were with the Robotic Combat Vehicle Light and Robotic Combat Vehicle Medium, two interrelated yet distinct Army robot efforts intended to introduce new levels of autonomy, weapons attack and surveillance for ground troops advancing to contact with an enemy. During the live-fire tests, the sensor, payload and weapons integration performance exceeded expectations, Coffman explained.

“We are seeing an increased stability or increased range that we didn't think was possible previously,” he added.

“Really anything is possible because the great people that helped us with requirements were very clear. They wanted these robots to be payload agnostic, and because you can plug and play different things on top of the robot, the interface between the payload and the robot has proven very capable,” Coffman said.

Coffman explained that due to advances in cross-domain networking and hardened connectivity, command and control for the robots could be performed from dismounted soldiers, larger manned vehicles or even air platforms navigating, directing and controlling the robots.

With the intent of optimizing manned-unmanned teaming possibilities, the robots are engineered with advanced, artificial-intelligence-enabled computer algorithms intended to enable progressively expanding degrees of autonomy. More and more, robotic sensors can perform tasks independent of human intervention such as navigational functions, sensing, networking and data analysis. Coffman explained that through systems such as aided target recognition, robots can themselves find, identify and acquire targets and perform autonomous obstacle avoidance exercises, but still benefit greatly from humans operating in a command and control capacity.

“For target acquisition, that's the payload and then if you talk about autonomous behavior, for the robot itself, like right now we know we can execute waypoint navigation, we can have teleoperation and we can do obstacle avoidance. And we're really making huge strides on additional autonomous behaviors in the missions they do,” Coffman said.

Coffman explained that the plan was to use the robots in tactical missions with soldiers to inform senior service leaders about when they may become a program of record.

“We're going to take those, those four lights, those four mediums, and then the four surrogates that we built for last year's robotic soldier experiment, and we are going to form them into a company,” he explained.

Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks.

He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a Master's Degree in Comparative Literature from Columbia University.



