Tomorrow’s news today

By Brett Drury (Scicrop) and Sai Abinesh (NUIG)

Throughout history, humanity has had the urge to predict the future. The Greeks consulted the Oracle at Delphi, the Romans inspected sheep entrails, and modern-day sages poke around in tea leaves to get the skinny on the future. This desire has found its way into finance, where modern-day haruspices pop up on television to make confident pronouncements about the future direction of the share du jour. All but the very fortunate of these prophets fail at their impossible task.

In recent times the research community has paid increasing attention to techniques that can partially remove the veil from the future. Complete and accurate forecasting is more or less impossible; however, one technique makes the prediction of a small number of future events feasible. This technique is called causal entailment. Causal entailment, also known as logical consequence, is borrowed from philosophy: one statement is the natural or logical consequence of another. It can be applied to news, and particularly to events described in news. For example, the start of a football match entails the completion, or end, of that match. Causal entailment may therefore allow the prediction of a future event based upon current events.

Causal models for events can be produced from causal relations in text. Causal relations are simply expressions of causality, such as: "Sydney storm causes widespread blackouts". The relation links a cause (a storm) to its consequence (a blackout). A simple causal model created from this text could be: if there is a storm, then infer that there will be a blackout some time later. The weakness of this model is that, from personal experience, we know that not all storms cause blackouts. Consequently, the relationship between storms and blackouts should be probabilistic.
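A minimal sketch of such a probabilistic model, assuming (cause, effect) event pairs have already been extracted from a news corpus, is to estimate the conditional probability of the effect given the cause from co-occurrence counts. The event names and counts below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical (cause, effect) event pairs extracted from a news corpus.
causal_pairs = [
    ("storm", "blackout"),
    ("storm", "blackout"),
    ("storm", "flight delays"),
    ("heatwave", "blackout"),
]

cause_counts = Counter(cause for cause, _ in causal_pairs)
pair_counts = Counter(causal_pairs)

def p_effect_given_cause(cause, effect):
    """Estimate P(effect | cause) from relative co-occurrence frequency."""
    if cause_counts[cause] == 0:
        return 0.0
    return pair_counts[(cause, effect)] / cause_counts[cause]

print(p_effect_given_cause("storm", "blackout"))  # 2/3: not every storm leads to a blackout
```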

This pattern-based approach for extracting causal relationships from text is often referred to as lexical causative relation extraction, and it typically works by identifying causal connections between cause and effect events. The main types of causal connection are causal verbs, conjunctions and adjectives. The aforementioned Sydney storms example relies upon a causal verb ("causes") to provide the link between the storm and the blackouts. Causal conjunctions use words such as "because" to link cause and effect, while adjectives such as "due" can also provide a causal link. Causation is often not a simple one-to-one relationship, and this is reflected in causal relations in text. Conjunctions can represent a situation where one cause depends upon another cause to create an effect. The phrase "rain and poor visibility caused accidents on the motorway" implies that the combined effect of rain and poor visibility caused the accidents, and that if one of these conditions were missing then the effect event (accidents) would not occur. Independent causes can also be modelled with conjunctions such as "or", as in the phrase "rain or poor visibility caused accidents": here rain and poor visibility are independent causes, and only one of them has to occur to cause accidents. Additionally, multiple effects may flow from single or multiple causes, as in the phrase "rain caused flooding and accidents on the motorway".
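As a rough illustration of the pattern-based idea (not the exact grammar or lexicon used in any of the systems mentioned here), a handful of regular expressions keyed on causal verbs, conjunctions and adjectives can pull candidate cause and effect spans out of a sentence:

```python
import re

# Illustrative patterns only; real systems use parsers and much richer lexicons.
PATTERNS = [
    re.compile(r"(?P<cause>.+?)\s+caus(?:es|ed)\s+(?P<effect>.+)", re.I),      # causal verb
    re.compile(r"(?P<effect>.+?)\s+because\s+(?:of\s+)?(?P<cause>.+)", re.I),  # causal conjunction
    re.compile(r"(?P<effect>.+?)\s+due to\s+(?P<cause>.+)", re.I),             # causal adjective
]

def extract_causal_relation(sentence):
    """Return a (cause, effect) pair if a causal connection is found."""
    for pattern in PATTERNS:
        match = pattern.search(sentence)
        if match:
            return match.group("cause").strip(" ."), match.group("effect").strip(" .")
    return None

print(extract_causal_relation("Sydney storm causes widespread blackouts"))
print(extract_causal_relation("Accidents on the motorway due to rain and poor visibility"))
```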

Causal relations can also help determine whether an effect is beneficial or detrimental, either through sentiment analysis of adjectives in the effect event or through the resultative causal verb. A resultative causal verb encodes part of the effect event. The determination of the sentiment orientation of an effect is shown in the phrase "excess rain caused the failure of the harvest", where "failure" provides the negative orientation of the effect event. It is therefore possible to infer that the phrase "excess rain" has a negative consequence. We refer to such phrases as sentiment indicators. Sentiment indicators may offer advantages over traditional sentiment analysis because they infer the future sentiment associated with the indicator word or phrase rather than the current sentiment orientation of opinion words or phrases. In time-dependent domains such as stock trading, this may allow early proactive actions rather than later reactive ones.
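One very simple way to realise this, assuming the causal relation has already been extracted, is to score the effect span against a sentiment lexicon and project that polarity back onto the cause phrase. The lexicon below is a toy one invented for illustration; a real system would use a full resource or a trained classifier:

```python
# Toy sentiment lexicon for effect events.
NEGATIVE = {"failure", "blackout", "accident", "accidents", "losses"}
POSITIVE = {"growth", "recovery", "bumper", "gains"}

def effect_orientation(effect_phrase):
    """Return the inferred sentiment orientation of an effect event."""
    tokens = effect_phrase.lower().split()
    if any(t in NEGATIVE for t in tokens):
        return "negative"
    if any(t in POSITIVE for t in tokens):
        return "positive"
    return "neutral"

cause, effect = "excess rain", "the failure of the harvest"
print(cause, "->", effect_orientation(effect))  # "excess rain" becomes a negative indicator
```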

An alternative to lexical approaches is to identify causal relationships through co-occurring events in time-series data. A number of techniques borrowed from econometrics, such as Granger causation, can identify a causal link between events that are separated in time. Granger causation has been used successfully to link Twitter sentiment to stock price movements. This is the approach we took at the National University of Ireland Galway (NUIG) in Sai's Masters thesis to predict the next day's news. In this project we looked at topics in news and found causal links between them: where there was a spike in the frequency of a given cause topic, we would predict an appearance of the effect topic at a given point in the future.
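A minimal sketch of this kind of test, using the grangercausalitytests function from the statsmodels library on two made-up daily series (a sentiment score and a price return), might look like the following; the data and the lag structure are invented for illustration:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)

# Invented daily series: sentiment leads returns by roughly one day, plus noise.
sentiment = rng.normal(size=200)
returns = 0.5 * np.roll(sentiment, 1) + rng.normal(scale=0.5, size=200)

# grangercausalitytests asks whether the SECOND column helps predict the FIRST,
# so the column order (returns, sentiment) tests "sentiment -> returns".
data = np.column_stack([returns, sentiment])
results = grangercausalitytests(data, maxlag=3)

for lag, (tests, _) in results.items():
    f_stat, p_value, _, _ = tests["ssr_ftest"]
    print(f"lag {lag}: F={f_stat:.2f}, p={p_value:.4f}")
```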

We chose topics because they represent dominant themes in a document or collection of documents, and can be identified with statistical techniques such as Latent Dirichlet Allocation (LDA). A topic model describes documents as mixtures of abstract topics, where each topic is a distribution over words based on their likelihood of occurrence. A graphical representation of an example topic model can be found in the illustration below.
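For readers unfamiliar with LDA, a compact sketch using scikit-learn on a handful of invented headlines shows the idea of topics as distributions over words (the corpus and topic count here are arbitrary):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# A handful of invented headlines standing in for a news corpus.
docs = [
    "storm causes widespread blackouts across the city",
    "heavy rain and poor visibility cause motorway accidents",
    "shares fall as harvest failure hits food prices",
    "stock prices rise after bumper harvest report",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the highest-weighted words for each topic.
terms = vectorizer.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {topic_id}: {', '.join(top)}")
```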

Sai’s research produced a time-series graph for each topic, which plotted topic frequency against time. These graphs were generated from the Signal Media News Dataset. Time series were compared with an appropriate time lag to see if a spike in the frequency of one topic produced a spike in the frequency of another; this association was tested with Granger causation. Pairs of topics where an appearance of one topic caused another were called causal pairs, and these causal pairs allow the estimation of the dominant themes in the next day’s news. A rough sketch of this step follows the illustration below.

Illustration: every column corresponds to a document and every row to a word. A cell stores the frequency of a word in a document; dark cells indicate high word frequencies. Topic models group documents which use similar words, as well as words which occur in a similar set of documents. The resulting patterns are called "topics". In this case, Topic 1 is highlighted in red, based on term similarity.
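The causal-pair discovery step described above could be sketched roughly as follows: build a daily frequency series per topic and keep the topic pairs whose Granger test falls below a significance threshold. The topic series, lag and threshold here are synthetic stand-ins, not the actual Signal Media data or thesis settings:

```python
import numpy as np
from itertools import permutations
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
days = 120

# Synthetic daily topic-frequency series; topic B echoes topic A two days later.
topics = {"A": rng.poisson(5, days).astype(float)}
topics["B"] = np.roll(topics["A"], 2) + rng.normal(scale=0.5, size=days)
topics["C"] = rng.poisson(5, days).astype(float)

causal_pairs = []
for cause, effect in permutations(topics, 2):
    data = np.column_stack([topics[effect], topics[cause]])  # test cause -> effect
    result = grangercausalitytests(data, maxlag=2)
    p_value = min(tests["ssr_ftest"][1] for tests, _ in result.values())
    if p_value < 0.01:
        causal_pairs.append((cause, effect, p_value))

print(causal_pairs)  # e.g. [('A', 'B', ...)] on this synthetic data
```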

Sai’s project in future news prediction using causation is not unique; there have been a number of attempts to predict news based upon causal relationships. Radinsky and Horvitz, for example, attempted to predict events in the near term by identifying sequences of events. They provide an example of a sequence where a drought followed by storms would lead to a rise in cholera. Their approach is based upon tracking entities in news stories and computing a causal relationship between news stories using Bayes’ rule. Another approach to finding causal relations between events is to use classification, which is the approach proposed by Kunneman and van den Bosch, who predicted player transfers in Dutch football based upon tweets from soccer matches.

Causation can be shown graphically in neuron diagrams. Although these diagrams have their limitations, they clearly show the flow of causation in a closed system. In the example diagram, the filled nodes are neurons that have fired, whereas the unfilled node represents a neuron that has not fired. The arrows show the flow of causation through the system.

The graph structure of the neuron diagram provides a clear way of reasoning within a causal system. This structure can be replicated within a probabilistic reasoning system known as a Bayesian network. In this model the nodes are random variables, and therefore a probability can be estimated for a node's state based upon the states of the other variables in the system. This is the approach we are taking at Scicrop in a project sponsored by FAPESP, where we are building Bayesian networks from text to represent a causal system for individual crops in Brazil. We reason with these models about the impact of future events on Brazilian agriculture, based upon evidence gathered from macro and local information sources.
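As a rough, simplified sketch of that idea (not the actual Scicrop models or probabilities), a tiny network built with the pgmpy library can answer a question such as "what is the chance of a poor harvest given a drought?" from hand-specified conditional probability tables:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two hypothetical causes feeding one effect node.
model = BayesianNetwork([("Drought", "PoorHarvest"), ("PestOutbreak", "PoorHarvest")])

cpd_drought = TabularCPD("Drought", 2, [[0.8], [0.2]])          # P(Drought)
cpd_pest = TabularCPD("PestOutbreak", 2, [[0.9], [0.1]])        # P(PestOutbreak)
cpd_harvest = TabularCPD(
    "PoorHarvest", 2,
    [[0.95, 0.5, 0.4, 0.1],   # P(PoorHarvest = no | Drought, PestOutbreak)
     [0.05, 0.5, 0.6, 0.9]],  # P(PoorHarvest = yes | Drought, PestOutbreak)
    evidence=["Drought", "PestOutbreak"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_drought, cpd_pest, cpd_harvest)
model.check_model()

# Query the network given evidence that a drought has occurred.
infer = VariableElimination(model)
print(infer.query(["PoorHarvest"], evidence={"Drought": 1}))
```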

The use of cause and effect to reason about problems and make inferences about the future has been proposed by some well-respected figures as a way of building intelligent machines. The attempts described in this brief blog post only scratch the surface of future-event prediction techniques based upon causal inference from current knowledge. It should be reiterated that, even with perfect knowledge and unlimited computing power, it is impossible to predict the future perfectly. The described techniques may, however, provide a limited insight into it.

About

Brett Drury is the Head of Research at Scicrop, a startup located in São Paulo, Brazil. He has a PhD in Computer Science from the University of Porto, Portugal, and a Law degree from Plymouth University in the United Kingdom. He was a Senior Research Fellow and Adjunct Lecturer, as well as Sai’s Masters thesis supervisor, at the National University of Ireland Galway. He can be contacted at: Brett.Drury@scicrop.com

Sai Abinesh is a PhD candidate at the National University of Ireland Galway and a researcher on the ROCSAFE project, which is supported by a grant from the H2020 programme. He is currently undertaking research in deep learning and other machine learning algorithms for image and sensor data analysis. He has a Masters in Data Analytics from NUIG, obtained through the completion of the future news prediction thesis referenced in this article.
