Classifying Policing Social Machines

Latest attempt to classify Policing Social Machines, using a Web Science approach.

Some readers (ok, let’s be honest and say “reviewers”) seem to have been confused as to why I would do this – it’s obvious, I think! If we understand, even very roughly, what ways there are of addressing crime using technologies, and how these have previously failed or where the problems lie, then we can do it better! If you look at the lifetime of apps or websites that are supposed to help victims, or forums to provide support – it’s often depressing to see last comments from three years ago, or a very limited geographic audience. We have to understand why this is so. Why do the techno-optimists get it wrong? Does technology really get “subverted” or “hacked”, or is each iteration of use perfectly justifiable, if technology is amoral? Governments and large organisations are asking these questions; if they can, so can I. They haven’t answered them very well yet, either; I hope that by looking at these things in a new way, I might.

1.     INTRODUCTION

This analysis examines social, technical and policy issues arising from the dissemination of “open” crime data alongside other sorts of official and unofficial crime data, or information about crime. We discuss the use of crime data within crime apps and sites, and specifically within police.uk, a site created by the U.K. government that allows the public to discover more about crime.

We present a classification that casts further light on these apps, with a consideration of how the unique characteristics of the web influence our methodology. We suggest that the classification starts to address social, technical and policy issues, including those arising from police.uk. We then see (a) how crime/police data is being used, (b) how data in this area can be crowdsourced, and (c) whether such data and apps can help to address crime, or whether they increase fear of crime.

1.1 U.K. Open Crime Data: Social, Technical and Policy Issues

Crime in the U.K. occupies the media and our T.V. schedules; it fills our fiction shelves and accounts for a large part of public spending, whether via warfare and defence, or policing. A recent report suggests that the amount spent on combating violent crime alone equates to 7.7% of the U.K.’s G.D.P., or £4,700 for every household [29]. The totality of expenditure is far more, especially as other policy sectors such as education, welfare and the N.H.S. also spend money on addressing crime within their remit.

An exploration of crime data, and of the apps that produce or represent it, is therefore of interest. A key U.K. site mediating a set of crime data, police.uk, has been created by the U.K. government. Police.uk allows the public to find out more about crime in their area, and uses open crime data (data that can be freely used, reused and redistributed by anyone). We use this site as a well-documented official exemplar of how technology and crime data come together, although it is not our only focus.

Our first approach to exploring the crime data ecosystem was to examine how crime data relating to police.uk moves between, and is mediated by, people and technologies. However, once we saw the myriad ways in which crime data is available on the web, and transformed by it, especially through the web’s iterations and evolutions, it became apparent that the protean manifestations of crime, society and the web mean that representations of crime (i.e. data) occur in many non-simple forms. This is made more complex by the nuances of the web: its infrastructure and interstices, and its populations of burgeoning and dying communities – in short, the masses of free-formed and evolving technologies and social practices that combat crime.

Given these factors, we thought it worth finding out whether such an exploration of crime data could be more usefully carried out by combining Web Science with the concept of “social machines” – in this instance, “policing social machines”. Social machines were first mentioned in the context of people and technology by Professor Sir Tim Berners-Lee, in “Weaving the Web” [1]:

“Real life is and must be full of all kinds of social constraint – the very processes from which “society” arises. Computers help if I use them to create abstract social machines on the Web; processes in which the people do the creative work and the machine does the administration.”

“More than 65 years after the first “Turing Machine” was described by Alan Turing as a “simple yet abstract computational device that enabled investigation of what can be computed”, we are now witnessing and embracing new kinds of systems. These are governed not purely by computational processes, but also by collective, harnessed and focused social interaction between humans using our computing milieu.” Commonly used examples of social machines are Wikipedia, Facebook, the DARPA Balloon Challenge, Reddit and Zooniverse.

The concept helps us to unpick some of the ways in which humans come together to use technology to solve problems. Police.uk is an example of a social machine; it uses the web to administratively present data that has been recorded by the police on crimes reported to them by the public. The data allows people to understand more about crime, policing and justice. We consider how to examine current assumptions and categorise the ways in which some of the current information about crime, and crime data, emerges in the context of the interplay between technology and society. Current research into social machines [24] explores issues arising from technology platforms for crowdsourcing knowledge, and how we might define the various concepts that are pulled into such attempts. Earlier work [2] introduced some social, moral, policy and technological issues with our current crime data, especially when it is reproduced within the Transparency and Accountability context as “open” crime data. These problems have made apparent the need for a classification of crime technology, or policing social machines, as part of ongoing research into how the Web helps us to address crime, and how, in the U.K., the site police.uk, which publishes open crime data as part of the transparency and accountability programme, contributes to this.

Police.uk underpins the U.K. Government’s policing and justice reform agenda; through it the public can hold their local police to account, with this accountability currently mediated by Police and Crime Commissioners. Crime data coming from forty-three U.K. police forces is represented on maps; inputting a postcode, name of a town, village or street takes the visitor to the crime map that represents the data geographically, or they can download the open data that mediates the maps from data.police.uk.
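To make the data path concrete, the open data behind the maps can also be retrieved programmatically. The sketch below is a minimal example assuming the JSON API documented at data.police.uk; the endpoint path and parameter names (`crimes-street/all-crime`, `lat`, `lng`, `date`) are taken from that documentation at the time of writing and should be treated as assumptions that may change.

```python
# Sketch: fetching street-level open crime data for a point and month.
# Endpoint and parameter names are assumptions based on data.police.uk docs.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://data.police.uk/api"

def street_crime_url(lat: float, lng: float, month: str) -> str:
    """Build the street-level crime query URL for a point and a YYYY-MM month."""
    params = urlencode({"lat": lat, "lng": lng, "date": month})
    return f"{API_BASE}/crimes-street/all-crime?{params}"

def fetch_street_crimes(lat: float, lng: float, month: str) -> list:
    """Download and parse the JSON list of crime records (network required)."""
    with urlopen(street_crime_url(lat, lng, month)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Central London, hypothetical month -- only builds the URL, no network call.
    print(street_crime_url(51.5074, -0.1278, "2014-01"))
```

Each returned record carries a category, an approximate (anonymised) location and an outcome status, which is the same smoothed data that mediates the maps on the site.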

There has been policy discussion about whether the Home Office should be presenting open crime data in this way, at a large cost. Should it instead produce the data and leave app developers to map it, via crime apps such as http://www.ukcrimestats.com/, as some information economy proponents might argue? [18] In order to answer this, we might ask whether police.uk is in fact doing the same thing as the other crime apps it appears to be in competition with, whether they use open data or not. The classification suggests some ways of approaching this question.

Another problem is that of social, institutional and media perception of what this and related data actually represents. The Home Office “smoothes” data that comes in from the different police forces in order for it to appear as open data; it has to be anonymised, aggregated and made to comply with U.K. and E.U. Data Protection directives. However, this open data comes from the same sources that produce the official crime data for the U.K. [26] While open crime data is acknowledged as being less informative than official policing data, it is interesting that debate about the nature of crime statistics in general has emerged following the public release of this smoothed open data, and conversations about policing data are now taking place in the context of transparency. It is this police recorded data that has prompted an enquiry into the nature of crime statistics in general, with worries about why police recorded crime figures are far lower than the British Crime Survey’s figures suggest. Following the Public Administration Select Committee’s sessions on crime statistics [20], John Flatley of the Office for National Statistics has said that “the data can’t tell us why the police appear to be recording a lower proportion of crime reported to them than in previous years…one suggestion is that there has been gradual erosion of compliance by the police with the NCRS and ONS outlines some possible drivers, including possible perverse incentives associated with performance targets.” The U.K. Statistics Authority has now removed the “kitemark” of National Statistics. The question remains, though, of whether the transparency and accountability programme has caused this healthy debate, and further, whether this recorded data is not untrustworthy crime data but respectable, statistically valid policing performance data, telling us about policing rather than crime. If so, how then do we find “crime data”?

The classification also helps to address this, by considering the provenance of the various crime data sets in currency.

There are wider contexts too. What of crime data that we ourselves generate but that is collected without our knowledge? We post about ourselves on Facebook, Foursquare and Twitter, and leave a trail of our movements via our mobile phones. Large corporations use this data and sell it on to advertisers for commercial purposes, while the police and intelligence agencies use this Open Source Intelligence (OSINT) with no need for consent. There are the greyer areas relating to the gathering of crime data revealed by the Snowden releases, while of course there is the use of the web for “dark” crime social machines, such as those mediated by the Silk Road and some users of Tor. There is not enough space here to discuss the dark crime social machines, and our focus is on policing social machines, but they provide context to some of the issues which the classification helps to unpick.

2.     POLICING SOCIAL MACHINES

2.1 What Lies Beneath Police.uk

Police.uk appears to be like other apps that offer mapped crime data, for example www.ukcrimestats.com/, www.crime-statistics.co.uk/ and www.crimereports.co.uk/. However, preliminary analyses show that although these sites appear similar, they are very different if we use Network Science and linguistic approaches to analyse their structure and content. We see that it is thus important to think about social, economic and psychological factors if we are to understand how to design sites or apps with similar intentions behind them, or how to understand their effects, criminologically for example. Examining these sites using network science measures such as eigenvector centrality and betweenness centrality shows that, because of the unique characteristics of the web, two visually identical web-mediated maps with the same crimes appearing on them can be very different. If we examine the use of language in these sites (particularly the discourse of risk and fear) and in those they link to, and which contextual ads appear, and if we think about their economics, we find some interesting conclusions. While police.uk and Crimereports link to other sites that inform and reassure, a site such as Ukcrimestats appears to link to sites that promote “fear of crime” and that sell security in order to help people feel safe. This suggests that a multidisciplinary approach to further understanding how police.uk and its companion sites are positioned in the world of networked crime apps on the Web could help us to better understand what we are looking at.
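To make the network-science point concrete, the sketch below computes unnormalised betweenness centrality by brute force over a tiny, entirely hypothetical link graph of crime sites. The node names and links are illustrative assumptions, not crawled data; a real analysis would run a library such as NetworkX over an actual link graph.

```python
from collections import deque
from itertools import combinations

def all_shortest_paths(graph, s, t):
    """Enumerate every shortest path from s to t in an unweighted,
    undirected graph given as {node: set_of_neighbours}."""
    dist, preds = {s: 0}, {s: []}
    queue = deque([s])
    while queue:                              # BFS recording shortest-path predecessors
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:                 # first time w is reached
                dist[w], preds[w] = dist[u] + 1, [u]
                queue.append(w)
            elif dist[w] == dist[u] + 1:      # another equally short route to w
                preds[w].append(u)
    if t not in dist:
        return []
    def walk(node):                           # rebuild full paths from predecessors
        if node == s:
            return [[s]]
        return [p + [node] for u in preds[node] for p in walk(u)]
    return walk(t)

def betweenness(graph):
    """Freeman betweenness: for each node, the summed fraction of
    shortest paths between other node pairs that pass through it."""
    scores = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        paths = all_shortest_paths(graph, s, t)
        if not paths:
            continue
        for v in scores:
            if v not in (s, t):
                scores[v] += sum(v in p for p in paths) / len(paths)
    return scores

# Hypothetical link neighbourhoods: two maps that look alike on screen
# but occupy very different positions in the link graph.
links = {
    "official_map":   {"gov_advice", "victim_support"},
    "commercial_map": {"security_ads", "gov_advice"},
    "gov_advice":     {"official_map", "commercial_map", "victim_support"},
    "victim_support": {"official_map", "gov_advice"},
    "security_ads":   {"commercial_map"},
}
```

Here `gov_advice` scores highest because most shortest paths route through it, while the two map nodes sit in distinct neighbourhoods, which is the structural difference the prose describes.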

2.2 Related Work

An earlier social machine classification had been carried out, using examples of “health social machines”. In “The Crowd Keeps Me in Shape: social psychology and the present and future of health social machines” [11], Van Kleek et al. provide a classification of health social machines. They used grounded theory, and clustered examples of health social machines into behavioural intervention, disease management, and collective sensemaking. Behavioural intervention machines “are systems that seek to help individuals achieve certain health-related goals by altering their daily routine(s) and activities”. These could be device-based, such as the Nike FuelBand, app-based, such as Fitness Pro, or site-based, such as Fitocracy. A second class of machines “aimed to help individuals cope with various kinds of conditions, including illness, disease, and mental health.” These included BigWhiteWall, a site that allows patients to cope with mental health issues such as depression. The third class they found aims “to crowdsource knowledge about disease, symptoms, treatments, and available resources to individuals who have personally experienced them.” Collective sensemaking allows large-scale aggregation of symptoms of diseases and crowdsourced intelligence on self-diagnosis and treatment. These included sites like PatientsLikeMe. Having seen some interesting points about devices and crowdsourced treatments emerging from these clusters, we decided to see whether the same classifications could be applied to crime technologies.

 

3.     THE CLASSIFICATION

3.1     Method

Although Grounded Theory emerged in the 1960s, it is firmly underpinned by earlier writings on the philosophy of science: Wittgenstein’s family resemblances, American pragmatism, symbolic interactionism, Kantianism, Mill’s “system of differences,” Baconian inductivism and Aristotelian axiology. In all of these we find an understanding that there is no such thing as “raw data” or even “raw theory”. In grounded theory, the researcher is not expected to come to their classification as though to a tabula rasa; rather, it is accepted that they know what they know, and that the act of classifying affects the researcher’s perception of the things being classified.

This is a nuanced theoretical approach when it comes to examining web artefacts, as it allows the theorist to revise their findings as they move through the discovery process, improving on the logico-deductive approach, which deals badly with non-static entities. Two points emerged here relating to research taking place on the web. The first is that the process of undertaking research on the responsive web can affect the thing being researched: for example, in examining police.uk, conversations with site designers inevitably had an effect on the site itself, as the designers were consciously or unconsciously influenced by questions about design, intent, competition, data provenance and policy. A methodological approach is therefore required which accepts some interplay between the observer and the thing observed, without this making the results of observation invalid. “Hence the reactive impact that investigators have upon their data bears more on the scope than on the credibility of an emerging theory. The technique that forces investigators to stay close to their data, and which constitutes the systemisation of the approach, is the constant comparative method.” [22] The second point is that of reproducibility. Mainstream science has long regarded reproducibility of results as fundamental to the scientific method; yet where we carry out web searches with results returned via Google, the data returned may not be the same from moment to moment, depending on data centre locations, indexing, and the constant addition of new links to the web that may alter search results. [9][28] Grounded theory allows for this; the focus is on creating a methodology that researchers can apply or alter themselves, as it fits their needs. “The aim of this research method is building theory, not testing theory.” [19] Such an approach is pragmatic and very much suits the fluidity of data results coming from the web.

Given these factors, we decided to keep Van Kleek’s grounded theory methodology, and continue a “running theoretical discussion, using conceptual categories and their properties” [7]. We would use the categorisations found by Van Kleek et al., although with an initial scepticism as to whether crime social machines could be classified alongside health social machines – health and crime seeming to map two very different sets of phenomena. We began by collecting examples of technology that addresses crime, and then applied the three distinctions to them. Making these distinctions affected the collection method, by clarifying what we thought we were looking for, so that the collection and the classification fed into one another.

To seed the searches, we used Google Alerts, the major search engines and meta-search engines such as www.DuckDuckGo.com, and searched through blogs and news sites featuring crime, crime prevention and crime apps. The initial search terms were “social machines crime, crowdsourcing crime, sensemaking crime, collective sensemaking crime, collective intelligence crime, human computation crime, crime statistics, police statistics”. Defining the search terms made it further evident that there was a need for empirical, a posteriori investigation rather than an a priori definition of social machines, as what we refer to as “social machines” had previously been referred to via terms such as “crowdsourcing”, “crowd based computation” and “collective sensemaking”. On the other hand, Google Ngrams shows the term “social machine” appearing as early as 1818.[1]

 

3.2     The Clusters

The question that we first asked was whether the results of our initial searches could fit into the clusters identified by the health research: behavioural intervention, management, and collective sensemaking. The results were diverse: there were many sites to do with reporting crime, few of which seemed long-lived; there were numerous discussions on forums to do with crime, whether professional or not; and then a large number of accounts of the ways in which police and security agencies were attempting to address crime, in some cases using devices or surveillance.

Following the earlier work on health social machines, it seemed logical to use technologies and mediative elements as clustering factors. However, as we applied these to our data, we found that we also needed to indicate the process that was occurring socially. For example, a Facebook platform might allow people to spread information about potential sightings of a missing person. This differs from police use of Facebook to verify an informant’s persona, in order to weight the information they provide. So “Facebook” on its own would not provide enough information to distinguish between its use as a platform and its use as an information weighting mechanism. We needed to work out: 1) what social element of addressing crime was occurring, 2) which technology was being used, and 3) what the technical or computational process was.

A second sweep of the sites made us consider how they fitted into the “stages” of how society addresses crime, from preventing crime to reporting on crime, to managing and collecting information about crime and making judgements. These categories then helped us to start clustering the data (sites and apps), and grouping it into sites that provide the public with general information, sites that provide more up to the minute information on where crimes are being committed, sites that allow the gathering of decentralised data for professionals and sites that allow the public to crowdsource particular problems. We also found a conceptual divide between peer-to-peer use of technology and specialised sites for professional use only.

3.3     How crime is addressed

After further examination of the sites and apps the searches had returned, we arrived at three sets of classifying dimensions. The first characterised how crime is addressed, spanning from overall knowledge of crime, through reporting on crime, risk evaluation and crime prevention, to “solving” crime, with solving or providing evidence flowing into making judgements on criminal or deviant behaviours, which in turn fed back into the cycle of reporting on crime and knowledge of crime.

 

3.4     Technologies

Along the second dimension we considered technologies, which we defined as mechanisms mediating between the human and the world. We began with chemical mediators – anti-depressants (mediating human perception of the world, and possibly helping reduce fear and aggression) and chemical castration to prevent crime or deviance – then f.M.R.I. scans (to detect intention), sensors, cameras, glasses and devices that “augmediate” perception, tablets, phones, laptops, P.C.s, mainframes and networked systems, through to environmental and building architectures, and finally, laws. We already see a society where technological structures mediate socio-political-legal mechanisms – on the web, censorship prevents people from accessing illegal sites, for example. From a Web Science perspective, our goal was to contextualise web-based technologies emerging from this wider trawl.

 

3.5     Mediating Processes

Along the third dimension we projected that we could indicate how crime is mediated by the technology under discussion. We could use A.I. terminology to decide whether the system involved “sensing”, perception, reasoning, knowledge, planning, learning, communication, or other forms of interaction. As we applied our clusters to the data we had and considered the implications we added more dimensions, discussed in the conclusion.

Using these dimensions would then allow us to sift through our initial search results, and start organising them in ways that would help us think logically about the distinguishing characteristics of successful or unsuccessful sites. The grounded theory approach means that these are not set as permanent definitions, but are pragmatic ways of slicing through concepts spanning both society and technology, pulling out the most salient factors to consider when building further sites or apps and considering return on investment.
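As a purely illustrative sketch of this sifting, each candidate site can be coded as a tuple along the three dimensions and then regrouped along any one of them. The example entries and dimension values below are assumptions for illustration, not the study’s actual coding.

```python
from collections import defaultdict
from typing import NamedTuple

class PolicingMachine(NamedTuple):
    name: str
    addresses: str    # how crime is addressed (section 3.3)
    technology: str   # mediating technology (section 3.4)
    process: str      # mediating/computational process (section 3.5)

# Illustrative codings only -- not the paper's actual data.
examples = [
    PolicingMachine("police.uk",     "knowledge of crime", "website",    "reporting"),
    PolicingMachine("Harrassmap",    "risk evaluation",    "mobile app", "sensing"),
    PolicingMachine("Crimestoppers", "crime management",   "website",    "communication"),
]

def cluster_by(machines, dimension):
    """Group machine names by the value they take on one dimension."""
    groups = defaultdict(list)
    for m in machines:
        groups[getattr(m, dimension)].append(m.name)
    return dict(groups)
```

Grouping `examples` by `"technology"` puts police.uk and Crimestoppers together, while grouping by `"addresses"` separates all three – mirroring how the choice of dimension re-slices the same collection of sites.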

4.     APPLYING CLUSTERS TO THE DATA

We found four overall groups: Knowledge and Report Groups, Behavioural Intervention, Crime Management Sites and Sensemaking. Van Kleek’s definitions had worked to a point, but we felt that we needed a further category for sites and forums where professionals discuss crime, and that seemed to serve “knowledge of crime at a distance”. This sat well with police.uk and Crimereports. These move towards “knowledge of crime right now”, which came into apps that allow members of the public to map crime as it occurs nearby, or to them. We felt that these fitted into the behavioural intervention group of the health machines classification, as they, if working well, allow members of the public to modify their behaviour if it seems that they might be about to walk down a path where rapes occur, or drive into a dangerous part of town.

We found that the boundaries between these groups were not clear cut – which did make us question the categories we had formed, although an answer is that perhaps our society’s concern with crime means that the myriad ways in which technology addresses it are evenly distributed. Knowledge of crime as seen on police.uk dissolves into knowledge of crime as seen on Harrassmap, with perhaps only the viewer’s visceral response to the information, their “fear of crime” as opposed to their “knowledge of crime”, as a differentiating factor. Behavioural intervention dissolved into crime management as we see knowledge of crime coming in from the public become administratively focused, with perhaps a degree of comfort and security attached to the public’s trust in sites like Crimestoppers – well known and well used. This administrative management of crime dissolved into allowing the public to work on crime information themselves – where the dimensions move from public intelligence gathering to public use of such intelligence to solve crimes. Such public intelligence gathering could be witting or unwitting, so we felt this category moved towards surveillance, from public self-surveillance to surveillance by policing organisations, via OSINT or devices.

 

4.1 Knowledge Groups

These were mostly insider forums where crime professionals swap tips and provide professional support and awareness. They feature discussion of policy initiatives and of those in higher authority, and the prevalent discourse was professional and showed domain appropriation. There were also forums where professionals whose work overlaps with crime or criminological concerns exchange advice. For example, the Professional Pilots Rumour Network has discussed combating terrorism, using plane-spotters to detect unusual activity, whether 9/11 was a conspiracy, and problems with sensationalist reporting on air crime. It was here that we started to see a polarity between discussion sites and action, or intervention, sites. Where the focus was on knowledge, we also included sites that provide reports on crime using highly processed data: police.uk and Crimereports fitted in here.

If we compare these to www.ukcrimestats.com, we see that both appear to offer information about crime using open crime data, presented geographically. However, the latter appeared to be conceptually mediated by “risk or fear” of crime, while police.uk informs about crime and offers advice and assurance on prevention.

 

For example, http://www.police.uk/hampshire/2FG02/crime provides advice about safety across a number of dimensions.

Table 1. Examples of policing social machine clusters

Knowledge Groups: International Consortium of Investigative Journalists, NPIA, POLKA; police.uk (http://www.police.uk/); Crime Reports (https://www.crimereports.co.uk/); UK Crimestats (http://www.ukcrimestats.com/)

Behavioural Intervention: Collabmap, Harrassmap, Hatari, Postacrime

Crime Management: Crimestoppers, Neighbourhood Watch, CEOP

Sensemaking: GetYourCarBack, The Search for Jim Gray, Reddit Boston Bombing, Find Sunil Tripathi, Facewatch, Helpfromhome, Algorithm for Matching Latent Fingerprints

Sensemaking (Surveillance): Internet Eyes, Blueservo, Palantir, Domain Awareness System, PRISM, IOT, RespectProject.eu, Senseable, Drones, Automatic License Plate Readers

On the other hand, Ukcrimestats links to sensationalist articles on crime, and adverts are served up in response to what advertisers “see” on the page. Many of these adverts are themselves risk- and fear-based, further underlining the difference between the Home Office site and Ukcrimestats. On its media page there are links to articles with language such as “reckless,” “staggering,” “fear,” “suffer,” “most violent roads,” “crime-ridden streets,” “hellhole street,” “blighted” and “chilling.” The difference emerges on examination of the links themselves, which seem to underline the idea that fear of crime is being used to promote the site. If a site is known by the company it keeps, these sites offer two very different understandings of crime and society from a network science view.
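A crude first pass at quantifying this difference in discourse can be made with a lexicon count over the text of linked articles. The lexicon below reuses the terms quoted above; the scoring scheme itself is an illustrative assumption, not the analysis actually performed in the study.

```python
import re

# Terms drawn from the linked-article language quoted above.
FEAR_LEXICON = {"reckless", "staggering", "fear", "suffer", "violent",
                "crime-ridden", "hellhole", "blighted", "chilling"}

def fear_score(text: str) -> float:
    """Proportion of word tokens that belong to the fear lexicon."""
    # Keep hyphenated compounds like "crime-ridden" as single tokens.
    tokens = re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())
    if not tokens:
        return 0.0
    return sum(t in FEAR_LEXICON for t in tokens) / len(tokens)
```

Comparing the scores of pages a site links to would then give a rough, comparable measure of how far each site’s neighbourhood trades on fear of crime.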

 

4.2 Behavioural Intervention

It was when trying to discover how, if at all, police.uk differed from similar sites, and looking at Ukcrimestats, that it became evident there was a cluster of apps that fell within the remit of discourses on risk and that were more “interventionist”. These sites are one remove from the “knowledge of crime” sites. Where the knowledge/crime data sites seem to be about reassurance, the risk apps often helped to collectively source decentralised knowledge of crime happening “right now”. Risk sites both report crime and can modify behaviour – they inform the crowd about potential risk areas, and provide a mechanism for reporting crimes of varying degrees of seriousness as they occur, which can then have the effect of modifying users’ behaviour, by stopping them from walking in “unsafe” areas, for example. They may target particular strata of society, such as women at risk of violence, but are available (to those who can access the technology) globally. There can be a peer-to-peer element to these; alternatively, a sense that those in authority are providing knowledge and advice to the less informed. There was a plenitude of machines designed to address particular sorts of problem, from domestic abuse, car theft and drug abuse to cybercrime and cyberbullying. These offer, as with the health social machines, general knowledge resources, such as places to get help, online advice, what to do to avoid or prevent attacks, activities to support safety and general support; and again, as with the health machines, intervention techniques, counselling and advice. There is a striking similarity between these and the health social machines.

4.3 Crime Management

Another class of crime social machines aims to help crime professionals and, in some cases, the public to manage crime. These are again “tips”-based, but funnelled or centralised. They dissolve into “crime management”, moving away from the behavioural intervention grouping, as knowledge flows from crime “amateurs” to the crime professionals; and while they mitigate risk, this is absorbed by the language of professionals, so that there is less “language of fear”.

As with the health machines, a dimension along which both risk apps and crime management systems vary considerably is the degree to which these machines encourage participant anonymity or identity disclosure. Some sites encourage individuals to use their offline identities, or their normalised online identities, either explicitly or through implicit disclosure, such as police use of platforms like Facebook. Where this disclosure is implicit, and people providing information are not aware of what mechanisms are in place for evaluating the strength of the data they provide, it is on record that getting reports about crime from people using a Facebook sign-on allows the police to evaluate the information, by assessing the reliability of the information provider. For example, the U.K. Crimestoppers site says, “Crimestoppers’ promise of anonymity has never been broken. If the identity of one of our callers was made known it would destroy trust in Crimestoppers and no one would contact us. This is another reason why it is so important to us that we can guarantee your anonymity.” It is notable that they do not specify who is promising exactly what to whom. The more effort an institution makes to convince the public that it is trustworthy, the more one might question whether this is so – i.e. trust is established by looking at behaviour, not promises.

4.4 Collective Sensemaking

Another group of crime social machines that parallels the health social machines classification is where collective sensemaking occurs. We found two distinct categories; one being the collective approach to solving a crime, “social solving” or “Crowdsolving”, and the other being the crowd making judgments on how to deal with crime, “social judgment machines”. We then postulated a third variety of sensemaking, which was of official surveillance of web-(and other device) mediated crowd behaviours.

The three fields in which sensemaking is commonly used are HCI, information science and organisational studies. In this instance we are looking at groups that enable people to collectively solve problems. Klein et al. [13] presented a theory of sensemaking as a set of processes involving causal reasoning, hypothesising, feedback and learning. There is reference to Minsky’s early work on frames, with feedback and re-hypothesising contributing to the re-framing of ideas. This also picks up on attribution theory [8] and the naïve scientist hypothesis, with an iterative layer added to the notion that we observe phenomena and then attempt to make sense of them via hypothesising, “with the important aspect being that neither data nor frame comes first; data evoke frames and frames select and connect data. When there is no adequate fit, the data may be reconsidered or an existing frame may be revised.” [12] As with the health social machines, these examples crowdsource knowledge of crime, and in particular of how to “solve” crime. Crowdsourcing knowledge of crime perhaps invites a temptation to relate experiences of crime as “symptoms”, with discussion of “treatments” and resources for individuals who have experienced these “symptoms”. However, most forum discussion does not end in agreement on the causes of crime, and therefore on its correct “treatment”.

For example, the site www.liveleak.com shows murders, assaults, car crashes and other disasters, some of which are criminal, with discussion from commenters. There is little evidence of web vigilantism, or “digilantism”, resulting in people agreeing on a treatment for a crime. Other sites, however, specifically cater for the “digilante”, and platforms such as Twitter, Facebook and Reddit, and online news sites such as Gawker, Jezebel and theweek.com, have been used to “out” individuals such as Violentacrez, @comfortablysmug, racist teens, Lindsey Stone and Hunter Moore. The generic treatment for the deviant or criminal behaviour exhibited in these cases is online exposure, or “doxing”, and shaming, a behaviour with a very long history. Daniel Solove suggests that shaming occurs in the absence of judicial punishment, as a form of “extra-judicial” punishment [27]. This large-scale aggregation of opinion relates crime or deviance to treatments and effects, some of which threaten individuals’ right to privacy, as Jonathan Zittrain has pointed out [6].

Crowdsolving comes about via sites such as www.getyourcarback, and Facebook and Reddit pages dedicated to finding individuals or providing information about crime – for example, the disastrous attempts to find the Boston bombers. There are also sites such as www.helpfromhome.org or www.innocentive.com, where the challenge is to find a person or, for example, to match latent fingerprints. Devices such as automatic number plate recognition, speed cameras and lie detectors seemed to fit here, as they provide data from the crowd that is then analysed by law enforcement. This could arguably fit into crime management, but we felt that one applicable dimension was the witting or unwitting provision of data or intelligence from the crowd: a crowd can self-surveil and happily give up its data, or be surveilled unknowingly.

5. CONCLUSIONS

5.1 Parallels between crime and health

We found some interesting primary parallels between the clusters found in the health social machines and the policing social machines. These parallels seemed to diverge in the areas of salience, feedback, transparency and surveillance.

One similarity is that consultations with “crime professionals” can be shared in a PatientsLikeMe manner: people experiencing crime can make judgments similar to those made by PatientsLikeMe users. If their experience of dealing with crime and of receiving advice from professionals were shared on forums then, as in the health context, this would create more transparency about the ways in which crime professionals do their jobs, and provide peer-based scrutiny. It could make the records of these professionals visible and provide public reputations that would enable decision-making about trust, if done with enough regard to maintaining some privacy for those in the public eye.

Where social machines enable the exchange of information they act in a similar way to the health social machines that function as answer gardens, and they have an emotional support function: the impact of crime can be enormously damaging, and there are plenty of forums where people who have experienced it can support each other. The same problems with crowdsourcing knowledge occur. These are well documented in [12], which cites controlled studies of bias, confirmation bias, illusory correlation and explaining away in making causal links between symptoms, causes and treatments; this is just as much a concern here, if not more so. Indeed there is debate over the causes of crime, and further debate over whether criminology is epistemologically resourced to explore these issues, with police often resorting to “crime science” in an attempt to avoid them.

5.2 Incentives

Money, anonymity, gamification and social conscience are seen throughout as incentives. Money is sometimes used as a reward for reporting, as sometimes on Crimestoppers; there are often rewards for capturing criminals, and Internet Eyes apparently pays people to identify criminal acts. Some of the crowdsolving platforms offer rewards; however, we found that money appeared very little as an incentive, the emphasis being more on social incentives. Gamification occurred, with the attendant possibility of trivialising serious crime. The site You Be the Judge does almost the opposite, austerely gamifying sentencing: one competes against the judge, and the sentences provided by “amateur judges” are compared against “real-life” decisions, with explanations provided. Crimes are given “moral panic” headings, and each crime is then broken down in such a way as to deflate the outrage factor and provide the facts clinically. This may well reduce the “fear of crime” engendered by the mass media – in effect, judicial P.R. Its aim is to show “users how judges and magistrates decide on the sentences they pass…by explaining how the decision-making process works.” This is a crime social machine that detaches discussion of sentencing from the mass media.

For crowdsolving based on hashtags, “find a person” Facebook pages, or Reddit threads, incentives seem more closely linked to being a good member of society. It is possible that some atavistic form of incentive is involved in the shaming behaviours seen on some of these pages, which could be explored with regard to gamification.

Social encouragement is more noticeable in many crime apps, with morality being an obvious incentive to take part. Some cheering-on is noticeable in police Facebook and Twitter notifications, as with exhortations to report crime to help keep society orderly. There was little overt evidence of people being encouraged to compete. Here incentives seem based on goodwill, although, as ever with the web, one can ask whether sites or forums that generate comments are also participating in link-building for the purpose of raising funds through advertising revenue. There can be other, more complex motives. The hero complex is a well-documented psychological condition: for example, “vanity” crimes committed by security guards, who create havoc in order to help people avoid it [4]. There is also an interesting psychological area, relating to extra-judicial self-help and digilantism, in which civilians enjoy feeling that they are “fighting crime” [3].

Where the policing social machines are more anarchic, there is a suggestion of “trolling”, with members competing to seem the most detached from the horrors being shown. This suggests another dimension, aside from knowledge, risk and fear: the prurient onlooker, potentially dissolving into the psychopathic, for whom viewing and commenting can slide into deviant or criminal behaviour. Underlying all of this is the fear that the capacity of the World Wide Web to amplify, on a huge scale, a citizen’s desire to do the right thing could lead to problems of mob justice.

Where we examined sensemaking, as social solving and social judgment, there was little evidence of structured elicitation processes, whereas we found that the risk apps depended somewhat on these, allowing users to get precise knowledge of the locations and times of crimes, and to evaluate the accuracy of the information.

Under social judgment we often saw quite chaotic responses rather than orderly diagnoses concerning crime. There are television shows that are gladiatorial in nature (Judge Judy in the U.S., for example, or Jerry Springer), where the audience is encouraged to pass judgment in the form of statements, providing evidence, or jeering at persons brought in ostensibly to have their problems dealt with. Although these are weak examples of policing social machines, since they are often about disorder and use an older technology than the web, they are gladiatorial policing social machines that offset the more orderly example given by You Be the Judge. That site elicits information from naïve “crime-fighters” in a highly structured, non-emotive way, and this information is of use as “polling” information about what judgments self-selected visitors would make in particular cases, and presumably about current opinion on sentencing policy. It certainly provides more objective information on sentencing policy than the tabloids. In the first case we see crime and judgment as spectacle and entertainment; in the second, a more educational aspect.

5.3 Challenges

Most challenges seemed to be sociotechnical. As with the health social machines survey, we see dangers in self-diagnosis or self-reporting of crime, where data is crowdsourced. How do we know that reports can be verified? Detection involves weighing up the facts, and where reports come in en masse this becomes critical. Anecdotally, public trust in Crimestoppers is not necessarily a problem; but of the mass of calls they receive in the U.K., only about 5–10% appear to have substance.

Another challenge, emerging from our examination of incentives, is how to preserve anonymity as situations move from peer-to-peer knowledge gathering to official gathering. It was notable that where crowdsolving occurs there is less concern about anonymity; where official bodies elicit information, anonymity comes into play. The U.K. Crimestoppers’ Google Analytics snippet contains “_gaq.push([‘_gat._anonymizeIp’]);”, which instructs Google Analytics to anonymise visitors’ I.P. addresses, although it is not clear whether Crimestoppers have any “social notifications” switched on that would allow them to track users who have bookmarked the site and are discussing it on forums. Google itself may still hold I.P. address information, even if that information is not pushed through to the analytics Crimestoppers receives. So it is not certain that anonymity is preserved: while Crimestoppers might not have the information, Google probably does. This must be borne in mind in any attempt to understand the behaviour of users on a site in order to make the site function better – if anonymity is promised as an inducement for reporting crime, then methods of tracking users start to undermine that promise, in a viciously circular problem.
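By way of illustration, the kind of masking that an IP-anonymisation option such as `_anonymizeIp` performs – Google documents it as zeroing the final octet of an IPv4 address before storage – can be sketched as follows. This is a hypothetical helper, not Crimestoppers’ or Google’s actual code:

```python
# Illustrative sketch only: the sort of IP masking that Google Analytics'
# _anonymizeIp option performs, i.e. zeroing the final octet of an IPv4
# address so that the stored address no longer identifies a single host.
def anonymise_ip(ip: str) -> str:
    """Zero the last octet of a dotted-quad IPv4 address."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError(f"not an IPv4 address: {ip!r}")
    octets[-1] = "0"
    return ".".join(octets)

print(anonymise_ip("203.0.113.42"))  # -> 203.0.113.0
```

Note that even a masked address still reveals the /24 network, which is why masking at the analytics provider does not by itself guarantee the anonymity an institution promises.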

If visitor data can be properly anonymised then this is worth bearing in mind for future site design, but it is here that we see technological interfaces blurring, perhaps deliberately, organisational boundaries and the user’s knowledge of who it is they are actually providing information to.

Tracking user behaviour took us to the question of data collection and processing, from the collection of “raw” data for analysis by the crowd or by crime professionals (e.g. analysis of video footage, or automatic number plate identification) to communications interception by government agencies. Salience and feedback seemed more publicly lacking, or more dogmatically managed, among the policing social machines than among the health apps: various government departments send reminders to pay tax or to have your vehicle assessed against government safety standards, preventing people from breaking the law through forgetfulness. There are various forms of mass push notification, such as schools automatically texting if a pupil is absent or if payments are not made for school meals. These do not operate through individuals self-surveilling; the monitoring devices are external and authoritative: registers, teachers, databases. More secretively, there are salience and feedback systems that monitor individuals at airports for physiological signs of stress that might indicate they are suicide bombers. The closest thing to self-monitoring we found was beepers on satellite navigation systems that tell you if you are speeding, or if you are about to pass a traffic camera. It is not clear whether these beepers are intended as a “nudge” technology, asking whether you know you are about to commit a crime, or simply to prevent your being caught.

There are, increasingly, ethical problems with policing by machine, where the temptation to gather data leads to muddied social, technical and legal issues. The idea behind panoptic programmes such as PRISM, Carnivore and Echelon is that we have the means to gather data that might reveal intelligence on terrorist activity – so why not use it? The seductive appeal of giving up a little privacy, and some civil liberties, in order to be secure has been well rehearsed, although it is not in fact clear who exactly becomes secure, nor what security really means in terms of risk management as opposed to actual knowledge of crime, for the government, the intelligence agencies, or the citizens being surveilled [6, 15, 19, 23]. Monitoring of the monitors seems to be driven by the media, leading to tussles between intelligence agencies and individual journalists and whistleblowers, played out in public. While governments might have moved from Dionysius’ cave to attempting to know all by pervasive listening, it seems very likely, in the context of information warfare, that their strategy is as much about control by fear as about gathering knowledge of crime from us: panoptic mechanisms ensure that none of us is certain when and how we are surveilled. It is never clear how much of the interplay between governmental surveillance and those who observe and protest it is strategic, as opposed to revealing real outrage that should, in theory, drive reform. However, looking at the problem of surveillance from the perspective of social machines leads us to conclude that we happily self-surveil for personal or consumer reasons. A possible avenue for further investigation is understanding how policing and government can sit in an ecosystem that acknowledges data generated in such ways – our smartphone and Facebook data – and how we can encourage people to feed such self-generated data into systems that fight crime without sacrificing privacy or control; as Zittrain forecasts, this will probably be the biggest problem for privacy. It is also worth thinking about how we monitor those who use our data, without being distracted by media posturing.

There is also a movement towards seeing crime in terms of risk, with insurance companies weighing in. There appears to be a trend among insurers towards enforcing driving law via in-car devices, and an increasing interest in the Internet of Things policing us via insurance companies. There need be no intervention from the police unless there is an accident: just a simple punitive increase in insurance costs where the driver speeds. These crime social machines are deeply embedded in the concept of the risk society, where sometimes misleading statistics swamp individuals’ rights.

This led us back to the economics of risk and fear, and to the point during the lifecycle of a crime at which an app asks for money as a hugely differentiating factor. Where someone reporting a crime in order to defend themselves is asked to pay to do so, the economics of the app seem based on cruelly leveraging the victim’s fear of crime. This is true to a lesser degree where fear of crime is used to sell an app that provides knowledge of crime. Where such data is free, as on police.uk, it seems to come from a perspective of assurance.

While the survey of health machines addresses the problem of self-reporting and concludes that crowdsourcing self-reported knowledge can result in bias, this applies equally to the official U.K. crime data sets coming from the Home Office: crime statistics, open crime data and the British Crime Survey, said to reveal the true “dark figure” of crime. As noted above, the problems of crowdsourcing causal links between symptoms, causes and treatments are well documented in [12], and the survey of policing social machines shows that this problem of modelling causation applies just as much to self-reported crime information as it does to health. One solution might be to index crime data, creating metadata about the source of data sets: official crime data comes in many cases from policing performance data; B.C.S. data from surveys on the perception of crime.
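The indexing idea above might be sketched, purely as an illustrative assumption (the field names and catalogue entries are invented, not an existing schema), as a provenance record attached to each data set:

```python
# Hypothetical sketch: attach provenance metadata to each crime data set so
# that consumers can see how its figures were generated and what caveats apply.
from dataclasses import dataclass

@dataclass
class DataSetProvenance:
    name: str
    origin: str             # e.g. "police performance data", "perception survey"
    collection_method: str
    known_caveats: tuple    # documented biases, e.g. target-driven recording

catalogue = [
    DataSetProvenance(
        name="Open crime data (police.uk)",
        origin="police recorded crime / performance data",
        collection_method="reports recorded by individual forces",
        known_caveats=("target-driven recording", "partial category coverage"),
    ),
    DataSetProvenance(
        name="British Crime Survey",
        origin="household survey of experience and perception of crime",
        collection_method="sampled interviews",
        known_caveats=("self-report bias", "sampling-frame limits"),
    ),
]

# A consumer of the data can then read the provenance alongside the figures.
for d in catalogue:
    print(f"{d.name}: origin = {d.origin}; caveats = {', '.join(d.known_caveats)}")
```

The point of such a record is meta-transparency: the data and its genesis travel together.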

Ulrich Beck has pointed out that there is little academic research on the subject of I.C.T.s in policing, and Manning [14] has suggested a lack of evaluation of “interactions between technology and social organization and practices because little has been written about the practices, constraints, and opportunities associated with the use of the new information technologies”. Their concerns do not apply only to the police as a force or as a service, but equally to the use of these technologies for policing in general, or to the pluralities of security provision.

5.3.1 Crime and Transparency

Many challenges come from thinking about how these web-mediated crime technologies are affected by transparency when they are held to account. Policing social machines can make people more crime-literate, and some of the increase in crime reporting rates may come from people becoming crime-aware. What is needed, then, is literacy about crime concerns and considerations, and a mechanism by which individuals can get the best crime expertise available, whenever, and from whoever, is best placed to help. But here transparency brings associated issues. We should consider ways of mapping awareness of crime, so that reports can distinguish between increases in crime itself and increases in awareness of crime leading to increased reporting. We must also consider the uneven, pluralised provision of policing services to the public. If we marry public self-policing, and monitoring of the police themselves, to an already “rationalised” police performance culture, we must be wary of transparency. The structures and criminal activities that dictate how a police officer does her job, and what constitutes good performance, will not be consistent across forces. Technology and the “joy of data” can exacerbate these inconsistencies and lead to irrationality, deriving from the impossibility of defining consistent measures across policing. If it is hard to model crime causation, does it not follow that a social phenomenon so hard to understand should not be salved with a “plaster”, or whitewash, of performance data? We need transparency about crime rates and about how those we pay to address crime do their jobs – but not when those jobs are dictated by targets rather than by crime.

Looking at transparency and incentives leads back to John Flatley’s comment about police data being affected by “possible perverse incentives associated with performance targets.” Police reporting comes from a target-driven culture [10]; surveillance and reporting can help professionals achieve their targets. Where targets, or the threat of terror, are the issue, incentivisation appears high, but possibly to the detriment of individuals in some cases, or in ways that threaten civil liberties and privacy wholesale. Too much transparency can also have security ramifications, creating more risk for survivors of crime.

Surveillance crime social machine technology, seen through the lens of transparency, again has considerably different consequences compared to the health context. In the health context, monitors and sensors are used to understand lifestyles and devise appropriate interventions. In the crime context, such devices, where employed by the state, are often representative of the state’s control over the individual. We need only scan recent headlines on the N.S.A. to see how, in this context, these technologies are seen as detrimental to our liberties and as privacy-threatening. There are many instances where their use is accepted, such as traffic cameras or the tagging of offenders, where the effect is in fact the same as in the health context: “tagging gives specialists unprecedented, accurate access to an individual’s daily activities. This information could give clinicians valuable context for understanding each patient’s lifestyle between visits, in order to devise more appropriate interventions” [11]. Where these machines are employed by individuals, their use is less contested; but, as with much of the crime context, there is a grey area where it is not clear how machinery is used by persons who fall somewhere between the state and the individual.

Technology is neutral, but its use can be political: no technology will provide a solution unless it can capture the complex socio-legal-economic processes that interweave crime and criminality. So another area worth examining is the balance between the use, and the hidden or open ownership, of technology or infrastructure by officials fighting crime or by corporations, as opposed to private citizens.

Looking at crime social machines, it becomes apparent that these are complex, organic, evolutionary systems, and that finding blueprints to build new crime social machines needs a thorough examination of the ways in which individual, social, moral, legal and psychological factors come into play when humans are connected en masse via new technologies. We should not focus exclusively on complex technical problems, but align these with softer issues: for example, eliciting sufficient information to understand what a victim believes they have experienced, encouraging them to return to complete their report, and preserving anonymity while collecting sufficient behavioural data to understand their interaction patterns.

 

5.3.2 Implications for police.uk

Having carried out this classification, we see that the categories we found allow us to say that the Home Office data sits within a potentially quite distinct sub-category of crime social machine: that of informing and assuring. It does not provide a statistically predictive service altering short-term behaviour, but it lets people think about crime in their neighbourhood and take longer-term steps to help.

We know that open Home Office crime data (and the official crime statistics it comes from) are largely performance data, and that the categories recorded constitute about a fifth of potential crime categories in the U.K. It is knowledge-based, and sits within the context of assurance and of mapping as a means of scientific understanding, as opposed to similar-looking crime map sites that leverage the fear of crime arising from risk analyses to sell services, including the data itself. It is possible to differentiate these similar-looking sites by investigating which sites they link to and which sites link to them. We suggest that police.uk sits within a new, distinct generation of ideological transparency, geared around the concept of the “open”, with the potential to be allied with sub-sets of crowdsourced data that provide a supporting context.
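The link-based differentiation suggested above can be sketched as follows. This is a minimal, illustrative sketch: it extracts the outbound `<a href>` targets from a page and tallies the domains linked to, so that an assurance-oriented site (linking to official data sources) can be told apart from a fear-leveraging one (linking to paid security services). The sample HTML and domains are invented for illustration:

```python
# Minimal sketch: tally the domains a crime map site links out to, as a crude
# signal of whether it is oriented towards official data or towards selling
# services. Uses only the standard library.
from html.parser import HTMLParser
from urllib.parse import urlparse
from collections import Counter

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.domains = Counter()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            domain = urlparse(href).netloc
            if domain:
                self.domains[domain] += 1

# Invented example page for illustration.
page = """
<a href="https://data.police.uk/about/">about the data</a>
<a href="https://www.gov.uk/government/statistics">official statistics</a>
<a href="https://example-alarms.com/buy">home security offers</a>
"""

extractor = LinkExtractor()
extractor.feed(page)
print(extractor.domains.most_common())
```

Inbound links ("which sites link to them") would need an external index rather than the page itself, but the outbound profile alone already separates the two kinds of site in many cases.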

Meta-transparency means that we view data and its genesis together: although we can say that this is data produced as part of the policing performance culture, and therefore may reflect target-setting, this understanding of data provenance should be a given in the culture of the open. We suggest that the data is about policing, rather than crime. It does not so much predict where crimes might occur, as the risk apps seem to; it currently maps trends in policing, as well as reported crime and the relationship between the police, the public and policy. It is shaped by the systems it moves through, the processes it undergoes, and the way in which it is mandated. To understand why a crime is reported, and why it appears on a map, we have to understand the confluence of all these things.

It is possible that when people look at the crime maps the Home Office produces, they know crime is recorded, documented and processed; it is monitored, and they can see crime-fighting across the U.K. The Home Office data appears to offer epistemological “knowledge that”: that a crime was committed, and that the police captured it. Other, similar sorts of reporting, with their risk-mediation and real-time updates, seem rather to offer “knowing how”: a route to understanding how crimes were committed.

While we were intrigued to notice the ways in which the crime social machines paralleled the health classifications, this led us to wonder more about how to define treatments of both crime and health. Just as we might struggle to define health as a positive (one notices its absence more than its presence), it is hard to explain exactly how technology can help to mediate crime-fighting, when the activities involved in addressing crime can sometimes seem to border on voyeuristic, atavistic, psychopathic or judgmental behaviours. As ever, the web has the peculiar property of externalising and objectifying our subjective moral compasses, both in the way we come to judgment and in how we decide which behaviours are valid in addressing crime.

We return to the question of how the classification might start to answer some social, technical and policy issues, including those coming from police.uk. (a) We can see how crime data is being used: the apparent worry about “faked data” dissolves into a more sensible discussion of the social origins of policing data, and perhaps removing the target culture would remove the perverse incentives to “shape” data according to often irrational targets. (b) We have seen how data can be crowdsourced, and have started to examine some of the attendant problems of anonymity, evaluation and incentives. These first two points presumably help not only the public but the police themselves. (c) We have asked whether data and apps such as these can help us to address crime without increasing the fear of crime, and looked at the way in which the information economy might drive some designers to sell crime data, or a sense of safety, by leveraging the fear of crime.

5.4 Policing Social Machine Signatures

Along the third dimension, we had projected that we could indicate how crime is presented or processed: whether the system involves sensing, perception, reasoning, knowledge, planning, learning, communication, or other forms of interaction. As we applied our clusters to the data and considered our conclusions, we added further dimensions: whether inputs are collectively sourced or funnelled via mechanistic or human processes; whether the data is open in its inputs and outputs; how strongly bounded the machine is in terms of time, platforms and definition, or of the networks of people, sensors or machines that provide or process the data; whether the technology is recent or well established, and who owns it; whether data is provided or processed wittingly or unwittingly; what incentives enable the machine; whether there are ethical risks pertaining to the data; whether the technology works only in one direction, or is easily subverted; and the degree of certainty about the data produced. This is future work, but we suggest that it will enable us to evaluate crime technologies more clearly and to establish policing social machine “signatures” that quickly identify the potential success, risk or threat posed by these emerging technologies. We can then decide whether money is well spent on such policing social machines, and how policy should be set regarding their use, both by policing services and by the public.
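As a hypothetical illustration of what such a “signature” might look like in practice, the dimensions listed above could be captured as a simple record and compared across systems. Every field name and value below is an assumption for illustration, not an established coding scheme:

```python
# Speculative sketch of a policing social machine "signature": a record of
# the classification dimensions, which could be compared across systems.
from dataclasses import dataclass, asdict

@dataclass
class MachineSignature:
    name: str
    processing: str         # sensing / reasoning / knowledge / communication ...
    sourcing: str           # "collective" or "funnelled"
    open_data: bool         # open inputs/outputs
    witting_provision: bool # do providers know they are providing data?
    incentives: tuple
    ethical_risks: tuple
    data_certainty: float   # 0.0 (rumour) to 1.0 (verified)

# An illustrative signature for police.uk, with assumed values.
police_uk = MachineSignature(
    name="police.uk",
    processing="knowledge",
    sourcing="funnelled",
    open_data=True,
    witting_provision=True,
    incentives=("assurance", "social conscience"),
    ethical_risks=("performance-shaped data",),
    data_certainty=0.6,
)

print(asdict(police_uk))
```

Once machines are encoded this way, comparing two signatures field by field becomes a mechanical way of asking where a new crime technology is likely to succeed, pose a risk, or be subverted.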

 

6. ACKNOWLEDGEMENTS

The work in this paper was funded by the Research Councils UK Digital Economy Programme, Web Science Doctoral Training Centre, University of Southampton, EP/G036926/1 and by SOCIAM: The Theory and Practice of Social Machines, funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/J017728/1 comprising the Universities of Southampton, Oxford and Edinburgh.

7. REFERENCES


[1]          Berners-Lee, T. and Fischetti, M. 1999. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. Orion Business Books.

[2]          Byrne Evans, M. et al. 2013. Crime applications and social machines: crowdsourcing sensitive data. (May 2013), 891–896.

[3]          Connectedness, Digilantism, and Trauma | 47/78 on WordPress.com: http://fortysevenseventyeight.wordpress.com/2013/05/16/connectedness-digilantism-and-trauma/. Accessed: 2013-08-27.

[4]          Experts Say “Hero Syndrome” Not Common Among Police – New York Times: http://www.nytimes.com/2004/08/02/nyregion/experts-say-hero-syndrome-not-common-among-police.html?sec=health. Accessed: 2013-08-27.

[5]          Friedewald, M. 2009. Privacy Threats in the Ubiquitous Information Society: An Analysis of Trends and Drivers. In: Proceedings of the WebSci’09: Society On-Line, 18-20 March 2009, Athens, Greece. (In Press). (2009).

[6]          Future of the Internet – And how to stop it.: 2014. http://futureoftheinternet.org/category/future-of-the-internet/. Accessed: 2014-02-23.

[7]          Glaser, B. and Strauss, A. 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine Transaction.

[8]          Heider, F. 1958. The Psychology of Interpersonal Relations. Psychology Press.

[9]          How Google Might Classify Queries Differently at Different Data Centers: http://www.seobythesea.com/2011/06/how-google-might-classify-queries-differently-at-different-data-centers/. Accessed: 2014-02-21.

[10]        Intelligent Policing – Triarchy Press: http://www.triarchypress.net/intelligent-policing.html. Accessed: 2014-02-22.

[11]        Van Kleek, M. et al. 2013. The crowd keeps me in shape: social psychology and the present and future of health social machines. (May 2013), 927–932.

[12]        Van Kleek, M. et al. 2013. The crowd keeps me in shape: social psychology and the present and future of health social machines. (May 2013), 927–932.

[13]        Klein, G. et al. 2006. Making Sense of Sensemaking 2: A Macrocognitive Model. IEEE Intelligent Systems. 21, 5 (Oct. 2006), 88–92.

[14]        Manning, P.K. 2011. The Technology of Policing: Crime Mapping, Information Technology, and the Rationality of Crime Control (New Perspectives in Crime, Deviance, and Law). NYU Press.

[15]        Moor, J.H. 2004. Towards a theory of privacy in the information age. Computer ethics and professional responsibility. (2004), 249–262.

[16]        O’Hara, K. 2010. Intimacy 2.0: Privacy Rights and Privacy Responsibilities on the World Wide Web.

[17]        O’Hara, K. and Shadbolt, N. 2008. The Spy in the Coffee Machine: The End of Privacy as We Know It. Oneworld Publications.

[18]        Open Data Comes to Market: 2013. http://eprints.soton.ac.uk/350043/1/Open Data Comes to Market report final.pdf.

[19]        Pace, S. 2004. A grounded theory of the flow experiences of Web users. International Journal of Human-Computer Studies. 60, 3 (Mar. 2004), 327–363.

[20]        PASC to take evidence on Crime Statistics – News from Parliament: http://www.parliament.uk/business/committees/committees-a-z/commons-select/public-administration-select-committee/news/crime-statistics3/. Accessed: 2014-02-21.

[21]        Regan, P.M. 1995. Legislating Privacy. The University of North Carolina Press.

[22]        Rennie, D.L. et al. 1988. Grounded theory: a promising approach to conceptualization in psychology? Canadian Psychology/Psychologie Canadienne. 29, 2 (1988).

[23]        Schoeman, F.D. 1984. Philosophical Dimensions of Privacy: An Anthology. Cambridge University Press.

[24]        Shadbolt, N. 2011. SOCIAM: The Theory and Practice of Social Machines. Engineering and Physical Sciences Research Council.

[25]        Solove, D. 2011. Nothing to Hide. Yale University Press.

[26]        Statistics, C.S.-H.O. Crime Trends. Office for National Statistics, Government Buildings, Cardiff Rd, Newport NP10 8XG, info@ons.gov.uk.

[27]        The Future of Reputation gossip, rumor, and privacy on the internet: 2007. http://docs.law.gwu.edu/facweb/dsolove/Future-of-Reputation/. Accessed: 2014-02-23.

[28]        United States Patent Application: 0100318516: http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.html&r=1&p=1&f=G&l=50&d=PG01&S1=20100318516.PGNR.&OS=dn/20100318516&RS=DN/20100318516. Accessed: 2014-02-21.

[29]        Violent crime costs the UK economy £124 billion, report suggests: http://www.telegraph.co.uk/news/10013830/Violent-crime-costs-the-UK-economy-124-billion-report-suggests.html.

 

 


[1] “In our social machines, – all the carefully organized device of politics, industry and war, – I see a similar exteriorization of distorted humanity, similarly precluding the general exercise of reason, enforcing intolerance and destroying liberty.” [50]

 

 
