Author Archives: Paul Matthews

About Paul Matthews

I'm a Senior Lecturer at the University of the West of England.

Modelling knowledge norms

Image © Windell Oskay

Modelling helps us to really think through theories and understand their ramifications. The process supports clearer thinking, the ironing out of logical inconsistencies and the comparison of alternatives. That said, modelling is not without its shortcomings. What we choose to include or exclude from our model and the assumptions we make will greatly determine the outcome we get. But if we bear these in mind, models have the enormous benefit of helping us to quickly simulate and assess the possible outcomes of policies, practices and social architectures [1].

In philosophy, modelling is less widely practised than intuition, thought experiment and the logical formalisation of philosophical concepts. That said, some philosophers have begun to realise the power of agent-based modelling for comparing and contrasting theoretical positions. Modelling has the advantage of being able to highlight interactions and emergent properties in complex systems, where feedback loops or perverse incentives might derail an arrangement that at first sight seemed equitable or ethical. For epistemology, and especially social epistemology, modelling can help to provide insight into how rational agents use and communicate testimony and how conditions for knowledge exchange itself can vary or be manipulated.

As I take a holistic view of epistemology that includes epistemic or doxastic systems, I will also cover here models that are less directly related to knowledge but which I feel have pertinence, particularly where they relate to knowledge exchange in online communities. I will therefore start with a treatment of norms in their widest sense before focusing in on epistemic norms.

A norm can be defined as follows:

A norm .. means fulfilling a generalized expectation of behaviour. When members of a society violate the societal norms, they may be punished – Habermas, quoted in Savarimuthu et al. (2009)

They are typified by the idea of a communitarian contract that may be at odds with individual interests:

Norms are conceived and spoken of as imposing obligations when the general demand for conformity is insistent and the social pressure brought to bear upon those who deviate or threaten to deviate is great. Norms.. are believed to be necessary to the maintenance of social life or of some highly prized feature of it.. the conduct required may.. conflict with what the person who owes the duty may wish to do – Ullmann-Margalit (1977)

Evolution of Norms in General

As Axelrod notes, norm definitions themselves are often not tested or properly elaborated, meaning that our understanding of how norms are formed or change can be quite poor. Axelrod's landmark paper sought to investigate norm formation through the design of a model inspired by game theory and evolution, and predicated on the power of punishment to help establish and uphold norms. The two main parameters in the model were boldness and vengefulness, and the two main actions available to agents were to "defect" (this could stand in for any norm violation, such as littering or smoking in a public place) and to "punish" those observed defecting. The likelihood of defection was determined in part by an agent's boldness, and the likelihood of exacting punishment by an observing agent's vengefulness. Points were awarded for successfully defecting, but more were deducted if the defection was observed and punished.

Between runs, the most successful agents replicated the most, passing on their boldness and vengefulness characteristics, whereas unsuccessful agents died without replicating. Importantly though, a 1% mutation rate meant the possibility of introducing random boldness and vengefulness in the offspring.

The outcome of Axelrod's initial model is that defection behaviour mostly falls to zero, as punishment by vengeful agents is costly. In other runs, however, vengefulness drifts lower until the benefits of defection become overwhelmingly strong, leading all the agents to defect repeatedly. The model thus has two stable states, respectively representing a successful norm being established and the complete failure of the norm.
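
To make the mechanics concrete, here is a minimal Python sketch of the model described above. The payoff constants follow Axelrod's published values (temptation T=3, hurt to others H=-1, punishment P=-9, enforcement cost E=-2), but the population size, number of rounds per generation and the top-half replication scheme are simplifying assumptions of mine, not a faithful reimplementation.

```python
import random

# Payoff constants from Axelrod (1986): temptation, hurt to others,
# punishment and enforcement cost.
T, H, P, E = 3, -1, -9, -2

N_AGENTS = 20
GENERATIONS = 100
MUTATION = 0.01    # 1% chance of a random trait in offspring


def new_agent():
    # Boldness and vengefulness are 3-bit traits (0/7 .. 7/7) in the paper
    return {"bold": random.randint(0, 7) / 7,
            "venge": random.randint(0, 7) / 7,
            "score": 0}


def play_round(agents):
    for actor in agents:
        seen = random.random()      # chance any witness observes a defection
        if actor["bold"] > seen:    # defect when boldness exceeds risk of being seen
            actor["score"] += T
            for witness in agents:
                if witness is actor:
                    continue
                witness["score"] += H    # defection hurts everyone else
                if random.random() < seen and random.random() < witness["venge"]:
                    actor["score"] += P      # defector punished
                    witness["score"] += E    # punishing is costly


def reproduce(agents):
    # Simplified selection (an assumption): the top-scoring half has two
    # offspring each; Axelrod's scheme used standard deviations from the mean.
    agents.sort(key=lambda a: a["score"], reverse=True)
    children = []
    for parent in agents[:len(agents) // 2]:
        for _ in range(2):
            child = {"bold": parent["bold"], "venge": parent["venge"], "score": 0}
            if random.random() < MUTATION:
                child["bold"] = random.randint(0, 7) / 7
            if random.random() < MUTATION:
                child["venge"] = random.randint(0, 7) / 7
            children.append(child)
    return children


agents = [new_agent() for _ in range(N_AGENTS)]
for _ in range(GENERATIONS):
    for _ in range(4):    # several defection opportunities per generation
        play_round(agents)
    agents = reproduce(agents)

print("mean boldness:    ", sum(a["bold"] for a in agents) / len(agents))
print("mean vengefulness:", sum(a["venge"] for a in agents) / len(agents))
```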

I have created a version of this model in NetLogo, available on the Modeling Commons. In Axelrod's paper the undesirable outcome happened on 2 of 5 runs of the model, though in mine it only seemed to happen occasionally (apparently depending on the right mutations happening at the right time). This must be largely due to implementation differences, or a bug in mine! Interestingly, the model tends to stabilise on low or zero boldness and only medium vengefulness – so only occasional punishment is needed to establish the norm.


Typical model run: boldness drops away to 0, vengefulness stabilises midway


In rare cases, defections initially drop away but then suddenly soar along with boldness: the community becomes over-lenient?

For Axelrod, the risk of the norm being overturned clearly indicated that other factors must be at play in the establishment of norms. He introduced the possibility of a "metanorm": the norm of enforcing the norm itself. If agents observed a defection but chose not to punish it, then they themselves were punished. Axelrod showed that use of the metanorm leads to more rapid stabilisation on a more "vengeful" community.

With "metanorms", agents are punished if they see defections but do nothing about it. This also leads to the development of a largely norm-observing (yet more vengeful!) community

With “metanorms”, agents are punished if they see defections but do nothing about it. This also leads to the development of a largely norm-observing (yet more vengeful!) community
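
As a rough sketch of how the metanorm changes the dynamics, the round-playing function from the earlier sketch might be extended as follows. This reuses the payoff constants and agent dictionaries defined above, and assumes (as Axelrod did) that meta-punishment carries the same punishment and enforcement costs as ordinary punishment:

```python
def play_round_metanorm(agents):
    # Variant of play_round: a witness who sees a defection but is lenient
    # may itself be punished by third parties ("the norm of enforcing the norm").
    for actor in agents:
        seen = random.random()
        if actor["bold"] > seen:
            actor["score"] += T
            for witness in agents:
                if witness is actor:
                    continue
                witness["score"] += H
                if random.random() < seen:                  # witness saw the defection
                    if random.random() < witness["venge"]:
                        actor["score"] += P                 # ordinary punishment
                        witness["score"] += E
                    else:
                        # Metanorm: others who notice the leniency may punish it
                        for meta in agents:
                            if meta is actor or meta is witness:
                                continue
                            if random.random() < seen and random.random() < meta["venge"]:
                                witness["score"] += P
                                meta["score"] += E
```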

Axelrod concluded his paper by noting that a range of other (unmodelled) factors must contribute to norm formation: notably power, social proof and reputation. The direct observation of others' behaviour, and the direct intervention of powerful or authoritative figures in enforcing or embodying norms, can be seen in many examples of human social and political behaviour.

It is interesting, and significant, that Axelrod focused on the negative reinforcement of punishment in his study of norms. For me, this takes the focus away from what might be the more important constituents of a community – namely the exogenous and endogenous goods that often come from observing a norm. It is perhaps telling that Axelrod's real-world examples of the metanorm are cases such as the persecution of African-Americans in 19th-century America.

Epistemic Norms

When we consider social knowledge, punishments do exist and might include the exclusion of individuals from social circles or from the community as a whole, the downplaying or outright exclusion of testimony, or the sidelining of unpopular belief. But alongside these is a plethora of more positive social and individual reinforcements: sense of community, self-efficacy, and scientific and social progress, to name but a few.

Perhaps the most important norms in social epistemology concern testimony, the transmission of knowledge through communication between individuals. Here there are at least two views. One, termed non-reductionism, or the principle of credulity, holds that humans are predisposed to accept testimony as a primary source of knowledge and that this has survival value; the default assumption is that the speaker is not trying to deceive or mislead us. An alternative, reductionist position holds that testimony is only acceptable as knowledge if the hearer is able to reason, compare or observe corroborating information. Perhaps more on the reductionist side, but also in line with the intermediate position of "epistemic vigilance", some epistemologists see a necessary accompaniment to the acceptance of testimony being an assessment of the character and credentials of the speaker.

So our candidate testimonial norms (TNs) can be defined as:

The rule “believe others in the absence of conflicting information” is one TN, and “only believe those you know to be reliable” is another. – Mayo-Wilson, 2014

They are competing because:

Those who accept testimony more readily will believe more with higher risk of error, while those who do not run the risk of knowing less – Zollman, 2014

Zollman (2014) has recently attempted to compare the relative merits of agents adopting different types of reductionist or non-reductionist position in a knowledge sharing community.
He finds that on a veritistic dimension, communities of non-reductionist or credulous agents perform slightly better in comparison to subjective reductionists and that the emergent network is larger and more diverse. On the other hand, the reductionist position is more likely to lead to homophily and cliques where the information shared can be either largely accurate or largely inaccurate.

Zollman’s work is similar to that reported by Mayo-Wilson (2013) on testimony in scientific communities. Here, a network-based model simulates the dissemination of new discoveries. Mayo-Wilson notes that in practice all testimonial norms tend to converge in the same way, and comparatively quickly relative to the discovery time – the actual scientific work that leads to the new knowledge. However, when misinformation is introduced, the reductionist position is less prone to error under changes in the communicative structure of the network (towards scientific insularity). This is because the more credulous agents suffer by being further removed from the sites of knowledge production, and error is compounded.

Output of Mayo-Wilson’s model: credulist positions are more susceptible to misinformation with increased network insularity (Mayo-Wilson, 2013)
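
A toy simulation can illustrate the trade-off that Zollman and Mayo-Wilson explore, though I should stress this is an illustrative sketch and not a reimplementation of either model: the misinformer rate, network density and verification rate below are all made-up parameters.

```python
import random

random.seed(1)
N = 50
reliable = [random.random() > 0.2 for _ in range(N)]    # 20% misinformers (assumed)

# Sparse random undirected network of communicating agents
neighbours = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < 0.1:
            neighbours[i].add(j)
            neighbours[j].add(i)


def run(norm, rounds=2000):
    trusted = [set() for _ in range(N)]    # speakers each hearer has verified
    accepted = correct = 0
    for _ in range(rounds):
        hearer = random.randrange(N)
        if not neighbours[hearer]:
            continue
        speaker = random.choice(list(neighbours[hearer]))
        report_true = reliable[speaker]    # reliable speakers testify truly
        # Occasional independent verification builds the reductionist's trust list
        if random.random() < 0.3 and report_true:
            trusted[hearer].add(speaker)
        if norm == "credulous" or speaker in trusted[hearer]:
            accepted += 1
            correct += report_true
    print(f"{norm}: beliefs formed {accepted}, accuracy {correct / max(accepted, 1):.2%}")


run("credulous")      # "believe others in the absence of conflicting information"
run("reductionist")   # "only believe those you know to be reliable"
```

Even this crude setup reproduces the qualitative trade-off from the quotes above: credulous hearers form far more beliefs with a higher error rate, while reductionist hearers are more accurate but know less.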

So both Zollman and Mayo-Wilson have shown that modelling testimony from a network angle can shed light on how knowledge can diffuse most readily, and their models point to credulous positions being as good as – or at times more efficient and egalitarian than – reductionist ones, given the right network structure.

Between Axelrod’s generalised approach to norms and Zollman and Mayo-Wilson’s more specialised epistemological context lies fertile ground for the investigation of social knowledge from a holistic, naturalised outlook. When considering online communities, the especially exciting prospect is the comparison and validation of model predictions against actual observed behaviour. This is how we might come to understand more about what makes for successful, thriving communities, and how others might be tweaked, or even fixed, through the application of policies and practices suggested by a model.

Footnotes
1. Shout out to Scott Page’s Model Thinking MOOC https://www.coursera.org/learn/model-thinking


References

Axelrod, R., 1986. An Evolutionary Approach to Norms. The American Political Science Review, 80(4), pp. 1095-1111

Mayo-Wilson, C., 2013. The Reliability of Testimonial Norms in Scientific Communities. Synthese, 191(1), pp. 55-78

Savarimuthu, B.T.R., Purvis, M. and Cranefield, S., 2009. Social Norm Emergence in Virtual Agent Societies. Declarative Agent Languages and Technologies VI, LNCS 5397, pp. 18-28

Ullmann-Margalit, E. 1977. The emergence of norms. Oxford [Eng]: Clarendon Press.

Zollman, K.J.S., 2014. Modeling the social consequences of testimonial norms. Philosophical Studies, pp. 1-13
 

Credibility and Persuasion: A Concept Map and Bibliography

This is an attempt to bring together research relating to credibility on the web. The bibliography will hopefully expand over time and I will organise the papers into groups according to theme. Concepts and definitions of credibility are many and various, but I hope this concept map captures most of the key features:

credibility concept map

Based largely on reviews by Hilligoss & Rieh (2008), Metzger (2007) and Wathen & Burkell (2002) in the bibliography below. Concept map created with CmapTools.

Much of the earlier work into credibility focussed on web pages, and there are a smaller number of studies that have assessed social contributions to collaboratively edited sites. I have so far found fewer still that focus on the impact of particular interface features and the use of separate heuristics, as opposed to overall ‘page-level’ assessments – these finer-grained details are interesting and important.


HILLIGOSS, B. and RIEH, S.Y., 2008. Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction in context. Information Processing & Management, 44(4), pp. 1467-1484
http://www.sciencedirect.com/science/article/pii/S0306457307002038

System/Topic: Information seeking generally
Method: Grounded theory. 24 undergrads, kept a diary to record one information seeking task a day for ten days. Followed up with in-depth interview.
Findings: Defined three levels of credibility judgement: Construct (personal definition of credibility including believability, truthfulness and objectivity), heuristic (general rules of thumb) and interaction (specific sources and content cues – including corroboration when they lacked prior knowledge)
Noteworthy: Importance of endorsement-based heuristics (p1477); use of a range of source types in the study; going beyond simple credibility definitions / dimensions

Keywords: concepts, heuristics.


HU, Y. and SUNDAR, S.S., 2010. Effects of Online Health Sources on Credibility and Behavioral Intentions. Communication Research, 37(1), pp. 105-132
http://crx.sagepub.com/content/37/1/105.abstract

System/Topic: Web-based health information
Method: Assessed behavioural intentions of students (N=500) based on seeing screenshots of the same information framed as a blog, home page, forum or web site. Also gathered ratings of message attributes and the extent of perceived information gatekeeping and source expertise.
Findings: Source of information was significant in credibility judgements. Blogs and discussion forums were seen as more complete. Perceived level of editorial and moderator gatekeeping was significant.
Noteworthy: Need for more nuanced understanding of source layering and interaction. Importance of collective (forums, sites) over individual gatekeeping (blogs, homepages).


IDING, M., CROSBY, M., AUERNHEIMER, B. and KLEMM, E.B., 2009. Web site credibility: Why do people believe what they believe? Instructional Science, 37(1), pp. 43-63
http://dx.doi.org/10.1007/s11251-008-9080-7

System/Topic: Web-based information
Method: Content analysis of student assessments and student ratings of sites related to their courses.
Findings: Information content quoted as most important, though name recognition of the source was also significant. Look and feel also important. Vested interests of sources – education tended to be treated positively, commercial interests negatively. Corroboration important in assessing site inaccuracies.
Noteworthy: Persuasiveness of personal testimony appearing on a page (p58); recommendations for web evaluation training – including teaching corroboration skills.


JIANU, R. and LAIDLAW, D., 2012. An evaluation of how small user interface changes can improve scientists’ analytic strategies, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2012, ACM, pp. 2953-2962 http://doi.acm.org/10.1145/2207676.2208704

System/Topic: Scientific analysis interfaces / visual analytics
Method: Students solving protein interaction tasks with an experimental online interface. One group given amended interfaces between sessions to add “nudges” toward better use of the interface, simultaneous consideration of evidence (to avoid over-reliance on working memory) and more evidence gathering for hypotheses.
Findings: Significantly more hypothesis and evidence gathering and hypothesis switching observed in the test conditions. No evidence of confirmation bias but conjunction bias and single attribute analysis demonstrated.
Noteworthy: Study well informed by cognitive theory; attempted to mimic a real-world task and manipulate the interface for more “normative” outcomes.

Keywords: analytic biases; persuasive technology; visual analytics.


METZGER, M.J., 2007. Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), pp. 2078-2091
http://dx.doi.org/10.1002/asi.20672

System/Topic: Web-based information
Method: Review of own/others studies
Findings: Primacy of presentation and design features in credibility judgements. Scope and accuracy also most often used in assessments by users; identity and authority less so. Verifying the author's qualifications is rarely done, though it is the most important dimension for credibility. Credibility judgements are situational, so consideration of the receiver's state is important. Need for a dual-process model distinguishing peripheral/heuristic from central/systematic assessments.
Noteworthy: People's reported behaviour differs from their actual behaviour when observed (fewer cues used in practice), hence a greater need for direct observation; need for simpler checklists or heuristics that users can more easily apply.


METZGER, M.J., FLANAGIN, A.J. and MEDDERS, R.B., 2010. Social and Heuristic Approaches to Credibility Evaluation Online. Journal of Communication, 60(3), pp. 413-439
http://dx.doi.org/10.1111/j.1460-2466.2010.01488.x

System/Topic: Online information seeking
Method: Focus groups with range of ages and abilities
Findings: Social cues (testimonials, ratings) are widely used as credibility clues, though a balance of reviews is more trusted than uniformly positive ones. Social confirmation of opinion is important. Trust is placed in the endorsements of "enthusiasts" – particularly active users. A set of heuristics is at play in source and message credibility judgements: reputation, endorsement (bandwagon), consistency (corroboration), expectation violation and persuasive intent.
Noteworthy: Stresses the importance of social cues and arbitration, unlike many previous credibility studies. Notes new model of bottom-up authority enabled by the social web.


NÉMERY, A., BRANGIER, E. and KOPP, S., 2011. First Validation of Persuasive Criteria for Designing and Evaluating the Social Influence of User Interfaces: Justification of a Guideline. LNCS 6770, pp. 616-624
http://dx.doi.org/10.1007/978-3-642-21708-1_69

System/Topic: HCI expert consultation
Method: Development of an evaluation grid for persuasive interfaces incorporating static (credibility, privacy, personalisation, attractiveness) and dynamic (solicitation, priming, commitment, ascendancy-addition) features. Validation of grid with HCI experts – used to evaluate 15 sample interfaces.
Findings: Claims high level of correct identification of persuasive features and hence expert agreement.
Noteworthy: Attempt to provide a standard tool and framework for evaluation of persuasiveness of interfaces – "persuasive strength". No real ethical judgement on persuasion, but largely construed positively. Criteria for the credibility dimension were that it should be possible to identify the reliability, expertise level and trustworthiness of the information.


SUNDAR, S.S., KNOBLOCH-WESTERWICK, S. and HASTALL, M.R., 2007. News cues: Information scent and cognitive heuristics. Journal of the American Society for Information Science and Technology, 58(3), pp. 366-378
http://dx.doi.org/10.1002/asi.20511

System/Topic: Online news (Google News)
Method: Online assessment of undergraduates (500+) who rated items as newsworthy, credible and worth clicking on. Experiment manipulated the sources, freshness and number of related sources to assess their impact.
Findings: Credibility of the source was overriding factor in assessments, though freshness and related articles became important with less immediately credible sources. Interestingly, number of related sources was bipolar – either a high or low number made the story more attractive.
Noteworthy: Claims for the perceived objectivity of news aggregators, because they are completely automated hence seen by some to be neutral – evidence quoted for this in the introduction. Dual process theory and cognitive heuristics in relevance judgements.

Keywords: cues, heuristics.


SUNDAR, S.S., XU, Q. and OELDORF-HIRSCH, A., 2009. Authority vs. peer: how interface cues influence users. CHI '09 Extended Abstracts on Human Factors in Computing Systems, ACM, pp. 4231-4236 http://doi.acm.org/10.1145/1520340.1520645

System/Topic: product reviews and purchase intention
Method: Students and members of the public viewed camera product pages with Editor’s Choice “seals” from authoritative or non-authoritative sources and user reviews (majority negative or majority positive) to trade off bandwagon and authority heuristics
Findings: Bandwagon cues (positive reviews) were found to be more powerful than the authority of the seal. Authority cues came more into play when bandwagon cues were contradictory
Noteworthy: Authors suggest adaptation of interface to user’s level of interest/involvement ( as high involvement should mean less use made of social cues)

Keywords: psychology; user interfaces; web design.


WATHEN, C.N. and BURKELL, J., 2002. Believe it or not: Factors influencing credibility on the Web. Journal of the American Society for Information Science and Technology, 53(2), pp. 134-144
http://dx.doi.org/10.1002/asi.10016

System/Topic: Credibility generally but focusses on online health info.
Method: Review
Findings: A range of credibility dimensions and criteria identified in different studies. Notes that most focus on source and message credibility, with less emphasis on the receiver, and that studies don't often differentiate message and medium.
Noteworthy:
Discussion of Tseng and Fogg's (1999) gullibility error or "blind faith" (typical of novices) and incredulity error or "blind skepticism" (typical of experts); conflict between credibility attributes (MDs rated as credible for health information on the web despite having the least reader-friendly messages).

Keywords: review, concepts.


WESTERMAN, D., SPENCE, P.R. and VAN DER HEIDE, B., 2012. A social network as information: The effect of system generated reports of connectedness on credibility on Twitter. Computers in Human Behavior, 28(1), pp. 199-206
http://www.sciencedirect.com/science/article/pii/S0747563211001944

System/Topic: Twitter users as sources (of H1N1 information)
Method:
281 participants shown mock Twitter profiles, manipulating follower numbers and follower/following ratios. User credibility ratings and behavioural intention indicators.
Findings:
Overall follower count not correlated with perceived competence of source (though medium follower numbers slightly more trustworthy). A narrow gap between follower and followed counts perceived as more competent.
Noteworthy:
A proposed "Goldilocks" heuristic that middle ground follower and followed counts are most trustwothy. Links made to Social Information Processing Theory (SIPT), though notes that a shorter interaction history may be needed on the social web than that theory suggsts.

Keywords: Social media; News; Online credibility; System-generated cues; Computer-mediated communication.


The Believing Brain

by Michael Shermer
ISBN:978-1780335292

This book is an interesting mix of anecdote, psychological theory, politics, history of science and conspiracy debunking – all connected to beliefs and human tendencies toward irrationality and partisanship. While it jumps around quite a lot, there is a good middle section on psychological theory, particularly relating to variable-interval reinforcement and pattern recognition and how these can lead to superstition.

The author – who himself admits to a change in belief from very religious to atheist as he discovered science – is clearly coming from a skeptical viewpoint, but at the same time shows balance. While noting that the gullible might tend to see patterns or effects where there are none, he admits the possibility that the skeptical may in turn fail to recognise new patterns that do actually exist. Similarly, in describing the tendency for political beliefs to polarise based on core values, he asks why we can’t share some of both sides’ approaches.

Science is seen as our best response to the cognitive biases we may show, and Shermer illustrates this in an entertaining history of astronomical discoveries, such as Saturn’s rings and the island universe. The examples show quite nicely how scientists may exemplify both open-mindedness in admitting new knowledge and bloody-mindedness in hanging on to theories that are rapidly losing traction!

Here are some notes I made while reading the book:

My Sketchnote on Flickr

Visualising changes over time

There are a range of approaches for visualising non-spatial data over time. For me, to be effective and useful a method needs to:

  • Illustrate, uncover or summarise patterns in the data that would be less visible otherwise;
  • Preserve relationships and proportionality;
  • Be based on sound and understandable underlying statistical/mathematical methods;
  • Ideally have open tools or scripts available for use

The following post seeks to summarise some good techniques using a few key publications and examples.

Stacked Graph Approaches for Linear Series

These approaches work for data that can be aggregated so that the overall size of the graph is meaningful.

Lee Byron’s Stream Graph uses Last FM Audioscrobbler data to illustrate a user’s listening habits over time. According to the authors, the visualisation method was popular with users, who were able to associate their listening habits with life events and the time of year. Data is extracted in the form of number of plays per month, per artist. The visualisation approach is similar to ThemeRiver – a smoothed, stacked graph centred on the origin and expanding either side – but adds some further algorithmic smoothing. It also selects an ordering that places the early-onset series toward the centre and subsequent onsets at the top and bottom. The software is available via the above link and uses Java/Processing. There is also a JavaScript implementation in the d3.js library.

Last FM Theme River
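
For anyone wanting to experiment without Processing, matplotlib's stackplot can produce a serviceable streamgraph; its "weighted_wiggle" baseline is an implementation of the Byron & Wattenberg layout. The plays-per-month data below is synthetic, standing in for the per-artist series:

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal streamgraph sketch: six "artists" over 48 months of made-up plays,
# smoothed with a simple moving average before stacking.
rng = np.random.default_rng(0)
months = np.arange(48)
raw = rng.poisson(5, size=(6, 48)).astype(float)
plays = np.array([np.convolve(row, np.ones(6) / 6, mode="same") for row in raw])

fig, ax = plt.subplots()
# baseline="weighted_wiggle" gives the streamgraph layout; "sym" gives ThemeRiver
ax.stackplot(months, plays, baseline="weighted_wiggle")
ax.set_xlabel("month")
ax.set_ylabel("plays per month")
plt.show()
```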

IBM History Flow was used to create compelling images of editing patterns on Wikipedia pages, illustrating the number of users involved in addition to the page length (so the more radical revisions are visible as large “steps”, and edit wars become zigzags). The researchers used the Wikipedia revision history and token matching to detect changes between versions. The program was written in Java but is not available for direct download.

image: flickr/viegas
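
The token-matching step is easy to prototype: Python's difflib can find which chunks of one revision survive into the next. This is just a stand-in for whatever matching algorithm the IBM tool actually used:

```python
import difflib

# Two toy revisions of a page, split into word tokens
old = "the quick brown fox jumps over the lazy dog".split()
new = "the quick red fox leaps over the very lazy dog".split()

# SequenceMatcher finds the blocks of tokens common to both revisions,
# which is what a history-flow view draws as continuous bands
matcher = difflib.SequenceMatcher(a=old, b=new)
for block in matcher.get_matching_blocks():
    if block.size:
        print("kept:", " ".join(old[block.a:block.a + block.size]))
```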

Spiral Graphs for Cyclic Series

Here, the axis is arranged spirally and scaled according to periodicity in the data. If not known in advance, periodicity may be detected either by animating through different cycle lengths (see a good example at EagerEyes) or by computing a spectrum, using a Fourier transform (for regularly spaced data) or least-squares fitting of sine and cosine functions. The spiral shape and the distance between plotted points are fairly easy to calculate. The example on the left shows daily sunshine intensity, with cloudiness between days easily comparable. The formulae to compute the spiral and scale the plotting are given in the paper below by Weber et al. Some code to create a spiral plot in R using ggplot2 is also available.
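
A basic spiral plot is straightforward in any library with polar coordinates: one revolution per period, radius growing steadily with time. The sketch below uses synthetic daily data and an assumed 365-day period; for the proper layout and scaling formulae see the Weber et al. paper in the references:

```python
import numpy as np
import matplotlib.pyplot as plt

# Three years of synthetic seasonal data with noise
rng = np.random.default_rng(0)
days = np.arange(365 * 3)
value = np.sin(2 * np.pi * days / 365) + rng.normal(0, 0.2, days.size)

period = 365
theta = 2 * np.pi * days / period    # angle: one full turn per cycle
radius = 1 + days / period           # spiral outward as time passes

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
points = ax.scatter(theta, radius, c=value, s=4, cmap="viridis")
fig.colorbar(points, label="daily value")
plt.show()
```

Because days a whole period apart sit at the same angle, vertical neighbours on the spiral are directly comparable, which is exactly what makes the cyclic pattern visible.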

Flickr Flow

The right hand example – Flickr Flow – combines the spiral graph and stacked graph approaches, showing predominant colours in Flickr Photos by season (summer at top, autumn on right, winter bottom, spring on left).

Calendar Views for daily data points

Here, daily average values or cluster groups are colour coded and displayed in a calendar. The example below is from the d3 JavaScript site and displays Dow Jones averages from Yahoo Finance.
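
A rough calendar view can be built by pivoting the daily series into a weekday-by-week grid and colour coding it, as in this sketch (synthetic data standing in for the daily averages, and 52 whole weeks assumed for simplicity):

```python
import numpy as np
import matplotlib.pyplot as plt

# One year of made-up daily values (e.g. index returns)
rng = np.random.default_rng(0)
daily = rng.normal(0, 1, 364)                # 52 whole weeks

grid = daily.reshape(52, 7).T                # rows = weekday, cols = week of year
fig, ax = plt.subplots(figsize=(10, 2))
im = ax.imshow(grid, aspect="auto", cmap="RdYlGn")
ax.set_yticks(range(7), ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"])
ax.set_xlabel("week of year")
fig.colorbar(im, label="daily value")
plt.show()
```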

Evolutionary / Version paths

This type of visualisation has a long history. A stemma codicum is used by philologists (text scholars) to track versions of a manuscript, based on modifications that can be traced over time from an original version. This example tracks versions of Dante’s Divine Comedy between 1321 and 1355.

Stemma codicum

A modern equivalent is the visualisation of source code repositories, such as the network viewer on GitHub. The following shows some recent activity on Node.js repositories. Users are shown down the left, with the timeline along the top. Each node is a “commit” to the repository. Connections between repositories via forking and merging changes are shown, along with major releases:

GitHub visualisation

References

AIGNER, W., MIKSCH, S., MULLER, W., SCHUMANN, H. and TOMINSKI, C., 2008. Visual methods for analyzing time-oriented data. IEEE Transactions on Visualization and Computer Graphics, 14(1), pp. 47-60

BYRON, L. and WATTENBERG, M., 2008. Stacked Graphs – Geometry & Aesthetics.
http://www.leebyron.com/else/streamgraph/download.php?file=stackedgraphs_byron_wattenberg.pdf

MAZZA, R., 2009. Introduction to information visualization. London: Springer.

VIÉGAS, F.B., WATTENBERG, M. and DAVE, K., 2004. Studying cooperation and conflict between authors with history flow visualizations. http://alumni.media.mit.edu/~fviegas/papers/history_flow.pdf

WEBER, M., ALEXA, M. and MULLER, W., 2001. Visualizing Time-Series on Spirals.
http://ieg.ifs.tuwien.ac.at/~aigner/teaching/ws06/infovis_ue/papers/spiralgraph_weber01visualizing.pdf