FinText
A Financial Word Embedding



This page hosts FinText, a purpose-built financial word embedding for financial textual analysis. The embeddings are trained on the Dow Jones Newswires Text News Feed from January 1, 2000, to September 14, 2015, which contains millions of news stories (2,733,035 unique tokens) covering finance, economics, politics, and other topics from news agencies worldwide. Extensive text preprocessing is applied to rid this large corpus of redundant characters, sentences, and structures. Four FinText models are available for download, covering the Word2Vec and FastText algorithms, each trained with both the CBOW and Skip-gram models. For a detailed review of the model specifications and their performance in realised volatility forecasting, see the paper cited below. All models on this page are for non-commercial research purposes only.

Some examples:

The figure below shows a 2D visualisation of the word embeddings. For each embedding, Principal Component Analysis (PCA) is applied to the 300-dimensional vectors. The chosen tokens are 'microsoft', 'ibm', 'google', and 'adobe' (technology companies); 'barclays', 'citi', 'ubs', and 'hsbc' (financial services and investment banking companies); and 'tesco' and 'walmart' (retail companies). 'Dimension 1' (x-axis) and 'Dimension 2' (y-axis) show the first and second principal components. The Word2Vec and FastText algorithms appear in the first and second rows. Google is a publicly available word embedding trained on part of the Google News dataset, and Wiki News is another publicly available word embedding trained on Wikipedia 2017, the UMBC webbase corpus, and the statmt.org news dataset. Continuous Bag of Words (CBOW) and Skip-gram are the two training models used to learn distributed representations of tokens. For the best word embedding, tokens from the same company group are expected to form a distinct cluster. The figure shows that FinText clusters all groups correctly.
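As a rough sketch of how such a visualisation can be reproduced, the snippet below projects the ten token vectors onto their first two principal components and plots them. It assumes the downloaded FinText files load as gensim Word2Vec models; the file name 'FinText_Word2Vec_CBOW' is a hypothetical placeholder for whichever model you downloaded.

import numpy as np
import matplotlib.pyplot as plt
from gensim.models import Word2Vec
from sklearn.decomposition import PCA

# Hypothetical file name: point this at the downloaded model file.
model = Word2Vec.load('FinText_Word2Vec_CBOW')

tokens = ['microsoft', 'ibm', 'google', 'adobe',   # technology
          'barclays', 'citi', 'ubs', 'hsbc',       # banking
          'tesco', 'walmart']                      # retail

# Stack the 300-dimensional vectors and reduce them to two dimensions.
vectors = np.vstack([model.wv[t] for t in tokens])
coords = PCA(n_components=2).fit_transform(vectors)

fig, ax = plt.subplots()
ax.scatter(coords[:, 0], coords[:, 1])
for (x, y), token in zip(coords, tokens):
    ax.annotate(token, (x, y))
ax.set_xlabel('Dimension 1')
ax.set_ylabel('Dimension 2')
plt.show()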

Many word embeddings are able to solve word analogies such as man:king :: woman:queen (':' means 'is to' and '::' means 'as'). The table below lists some analogy challenges we posed and the answers produced by each word embedding considered here; 'NONE' indicates that one of the query tokens is not in the embedding's vocabulary. A query sketch follows the table. FinText is clearly more sensitive to financial context and able to capture very subtle financial relationships.

Analogy                                 | Google     | Wiki News | FinText
----------------------------------------|------------|-----------|-------------
debit:credit::positive:X                | positive   | negative  | negative
bullish:bearish::rise:X                 | rises      | rises     | fall
apple:iphone::microsoft:X               | windows_xp | iphone    | windows
us:uk::djia:X                           | NONE       | NONE      | ftse_100
microsoft:msft::amazon:X                | aapl       | hmv       | amzn
bid:ask::buy:X                          | tell       | ask-      | sell
creditor:lend::debtor:X                 | lends      | lends     | borrow
rent:short_term::lease:X                | NONE       | NONE      | long_term
growth_stock:overvalued::value_stock:X  | NONE       | NONE      | undervalued
us:uk::nyse:X                           | nasdaq     | hsbc      | lse
call_option:put_option::buy:X           | NONE       | NONE      | sell
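As a hedged sketch, assuming the models load as gensim Word2Vec objects (the file name below is hypothetical), an analogy query a:b :: c:X amounts to finding the token whose vector is closest to vector(b) - vector(a) + vector(c):

from gensim.models import Word2Vec

# Hypothetical file name: point this at the downloaded model file.
model = Word2Vec.load('FinText_Word2Vec_CBOW')

# Solve bid:ask :: buy:X, i.e. find the X closest to ask - bid + buy.
answer = model.wv.most_similar(positive=['ask', 'buy'],
                               negative=['bid'], topn=1)
print(answer)  # per the table, FinText should rank 'sell' first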

We also challenged all word embeddings to produce the three tokens most similar to 'morningstar'. This token is not in Google's training vocabulary. Wiki News answers 'daystar', 'blazingstar', and 'evenin'; FinText answers 'researcher_morningstar', 'tracker_morningstar', and 'lipper'. When asked to find the unmatched token in a group such as ['usdgbp', 'euraud', 'usdcad'], a collection of exchange-rate mnemonics, Google and Wiki News do not recognise these tokens at all, while FinText produces the correct answer, 'euraud'.
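Both queries map onto standard gensim similarity utilities; a minimal sketch, again assuming gensim-loadable models and a hypothetical file name:

from gensim.models import Word2Vec

model = Word2Vec.load('FinText_Word2Vec_CBOW')  # hypothetical file name

# The three nearest neighbours of 'morningstar' by cosine similarity.
print(model.wv.most_similar('morningstar', topn=3))

# The token whose vector lies furthest from the group's mean vector.
print(model.wv.doesnt_match(['usdgbp', 'euraud', 'usdcad']))  # 'euraud'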

Citation:
Rahimikia, Eghbal and Zohren, Stefan and Poon, Ser-Huang, Realised Volatility Forecasting: Machine Learning via Financial Word Embedding (July 28, 2021). Available at SSRN 3895272.

Word2Vec/CBOW (5.86 GB)

This FinText word embedding is built with the Word2Vec algorithm and the CBOW model.

Word2Vec/Skip-gram (5.85 GB)

This FinText word embedding is built with the Word2Vec algorithm and the Skip-gram model.

Python code (3.48 KB)

This Python script can be used to read and work with the downloaded word embeddings.
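The bundled script is the authoritative loader; as a minimal hedged alternative, gensim can read the saved models directly (the file names below are hypothetical placeholders):

from gensim.models import FastText, Word2Vec

w2v = Word2Vec.load('FinText_Word2Vec_CBOW')   # hypothetical file name
ft = FastText.load('FinText_FastText_CBOW')    # hypothetical file name

print(w2v.wv['liquidity'].shape)  # (300,)
# FastText composes vectors from character n-grams, so it can also embed
# tokens that never appeared in the training corpus.
print(ft.wv['quantamental'].shape)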

FastText/CBOW (10.80 GB)

This FinText word embedding is built with the FastText algorithm and the CBOW model.

FastText/Skip-gram (10.80 GB)

This FinText word embedding is built with the FastText algorithm and the Skip-gram model.

© Copyright 2021, rahimikia.com