AI in stock market: An introduction

Anand James & Gokuldas P. V.

Artificial Intelligence (AI) is the buzzword of the day. AI systems are designed to mimic or replicate human cognitive functions, allowing them to adapt and respond to a variety of tasks and challenges. Based on capability, AI systems can be classified into three groups:

  • Artificial Narrow Intelligence – designed to perform only specific tasks, such as image recognition and recommendation systems.
  • Artificial General Intelligence – where the system can think like a human, with the ability to understand, learn and apply its knowledge across a wide range of tasks, including sequential ones.
  • Artificial Super Intelligence – an AI that surpasses human intelligence in all aspects, including creativity, problem-solving and emotional intelligence; think Skynet from the Terminator film franchise.

Currently, all deployed AI systems belong to the Artificial Narrow Intelligence group, and efforts are underway to develop models that meet the bar of Artificial General Intelligence. AI algorithms can also be classified by application:

  • Expert Systems – designed to make decisions in a specific domain in which they have been trained, e.g. medical diagnosis systems and financial advisory systems.
  • Natural Language Processing (NLP) systems – which can understand, interpret and generate human language; chatbots and translation services are systems of this kind.
  • Robotics AI systems – integrated with robots to perform tasks that require physical manipulation and movement.
  • Machine Learning – AI systems that learn from data and improve their performance over time without being explicitly programmed. These models are used extensively for prediction, classification and grouping.
  • Computer Vision – models trained to understand visual information such as images and videos, used for face recognition, object detection, autonomous vehicles and more.

Building Blocks of AI in financial markets

The AI techniques that find the most application in financial markets are NLP and machine learning. The key question is how these models are built, and how they are capable of learning and understanding the way humans do. This is where the neural network architecture comes in.

Neural Networks

Neural networks are the basic building block of any artificial intelligence model that makes decisions in a manner resembling the human brain, mimicking the network of biological neurons and the way it works. A neural network consists of nodes, or artificial neurons, arranged in layers: an input layer, one or more hidden layers and an output layer. Each node is connected to others, and each connection has a weight associated with it. If the output of an individual node is above a specified threshold value, that node is activated, sending data to the next layer of the network. Neural networks are trained on data and learn to identify the patterns within it, whether the data is tabular, textual, audio, image or video. It is this training phase that is resource intensive and time consuming, typically requiring sophisticated graphics processing units (GPUs).

Feed Forward Networks

A feed forward network is an artificial neural network in which data flows in one direction only: from the input layer, through the hidden layers, to the output layer. To generate output with minimal error, the model is trained on the same data over several iterations (epochs).
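
As a minimal sketch (assuming PyTorch; the data here is synthetic), the following builds and trains a small feed forward network over repeated epochs:

```python
import torch
import torch.nn as nn

# Synthetic regression data: 100 samples, 4 input features.
X = torch.randn(100, 4)
y = X.sum(dim=1, keepdim=True) + 0.1 * torch.randn(100, 1)

# Input layer -> one hidden layer -> output layer; data flows one way only.
model = nn.Sequential(
    nn.Linear(4, 16),   # input layer to hidden layer
    nn.ReLU(),          # activation: a node "fires" on positive input
    nn.Linear(16, 1),   # hidden layer to output layer
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Repeated passes over the same data (epochs) gradually reduce the error.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()     # backpropagation: adjust weights to cut the error
    optimizer.step()
```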

Recurrent Neural Networks

RNNs are specially designed neural networks capable of handling sequential data, trained using a technique known as backpropagation through time. Here, the data flow is not unidirectional.

RNNs have a form of memory that allows them to capture dependencies in the data over time. This makes them suitable for tasks where the order of the data matters, such as language modelling and time series prediction.

Long Short-Term Memory (LSTM) Networks

To overcome the shortcomings of RNNs in handling long sequences of data, LSTMs were introduced. They are equipped with gates (input, forget and output gates) and special units called memory cells that can maintain information over extended periods.
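
A minimal sketch (assuming PyTorch, with random stand-in data) of an LSTM that reads a sequence and predicts the next value from its final hidden state:

```python
import torch
import torch.nn as nn

class NextValueLSTM(nn.Module):
    """Reads a sequence and predicts the next value from the final hidden state."""
    def __init__(self, hidden_size=32):
        super().__init__()
        # The LSTM's input, forget and output gates decide what the
        # memory cells keep or discard at each time step.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                  # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)              # out: (batch, seq_len, hidden)
        return self.head(out[:, -1, :])    # use the last time step only

model = NextValueLSTM()
window = torch.randn(8, 30, 1)             # 8 sequences of 30 past observations
prediction = model(window)                 # shape: (8, 1)
```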

Transformers

Transformers are among the most advanced neural network models. They rely on a mechanism called attention, particularly self-attention, in which each element of the input sequence is compared with every other element to compute attention weights. This allows the model to capture dependencies between different parts of the sequence regardless of the distance between them. An encoder-decoder architecture then uses these weighted dependencies to produce the output.
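
A minimal single-head self-attention sketch in NumPy (toy dimensions, random weights) showing the pairwise comparison and weighting described above:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: every element attends to every other."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # pairwise comparisons
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # softmax -> attention weights
    return weights @ V                             # weighted mix of all positions

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                        # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)                # (5, 8): context-aware tokens
```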

Large Language Models

Large Language Models (LLMs) represent a significant leap forward in AI technology and are a key component of the current state of the art. They gained widespread popularity with the introduction of OpenAI's ChatGPT in 2022.

LLMs are a type of artificial intelligence designed to understand and generate human language. They can handle a broad range of language-related tasks such as translation, summarization, question answering and even creative writing, generating coherent and contextually relevant text based on the input they receive.

LLMs are largely based on the Transformer architecture and are equipped with natural language understanding and generation capabilities. OpenAI's GPT models (GPT-4, GPT-3.5, which power ChatGPT) and Google's BERT (Bidirectional Encoder Representations from Transformers) are transformer-based language models.

Instead of developing and training a domain-specific model for each use case, which can be costly and resource-intensive, fine-tuning a general-purpose LLM for specific applications is an efficient approach.

APPLICATIONS OF LLMs IN THE STOCK MARKET

Time Series Analysis

Time series analysis forecasts by examining data collected over time to identify patterns and trends. It looks for trends (long-term direction), seasonality (repeating patterns) and cycles (longer-run fluctuations), and is mainly used to forecast future values from past data, such as predicting stock prices or weather conditions.

However, classical statistical models such as ARIMA and Exponential Smoothing require the data to be stationary, meaning that the statistical properties of the series (mean, variance) are constant over time (even LSTMs tend to train better on stationary inputs). Real-world data often violates this assumption, requiring additional steps such as differencing or transformation.
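
A minimal sketch of the stationarity workflow (assuming statsmodels, with synthetic data): test with the Augmented Dickey-Fuller test, difference, test again:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Synthetic non-stationary series (a random walk, like a raw price series).
rng = np.random.default_rng(0)
prices = pd.Series(100 + rng.normal(0, 1, 500).cumsum())

# Augmented Dickey-Fuller test: a low p-value suggests stationarity.
p_raw = adfuller(prices)[1]

# First-order differencing usually removes the trend in price data.
returns = prices.diff().dropna()
p_diff = adfuller(returns)[1]

print(f"p-value raw: {p_raw:.3f}, p-value differenced: {p_diff:.3f}")
```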

Traditional time series analysis also struggles with multivariate analysis: analysing multiple time series simultaneously is complex and computationally intensive, requiring advanced models and techniques.

Autocorrelation is a key factor in time series prediction: the correlation of a series with its own past values, which supports pattern recognition, seasonality detection and better forecasts. Stock returns, however, exhibit very weak autocorrelation, which makes it difficult for traditional time series models to forecast them effectively. In addition, external factors such as market sentiment and economic indicators influence the stock market and also need to be incorporated into the analysis. Stock market forecasting therefore needs models with a more comprehensive approach, capable of combining historical data with external factors and coping with the near-absence of autocorrelation.
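
The contrast is easy to verify; the following sketch (pandas assumed, synthetic random-walk prices) compares the lag-1 autocorrelation of prices and of returns:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
prices = pd.Series(100 + rng.normal(0, 1, 500).cumsum())
returns = prices.pct_change().dropna()

# Price levels look highly autocorrelated (today's price is near yesterday's),
# but the returns a forecaster actually needs are close to uncorrelated.
print("lag-1 autocorrelation of prices: ", round(prices.autocorr(lag=1), 3))
print("lag-1 autocorrelation of returns:", round(returns.autocorr(lag=1), 3))
```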

This is where LLMs have a role to play. Repurposed for time series data, they can combine the strengths of traditional time series analysis methods with the advanced capabilities of LLMs to make predictions.

Unlike conventional approaches, such as ARIMA, which often require extensive domain expertise and manual tuning, time series LLMs leverage advanced machine learning techniques to learn from the data automatically. This makes them robust and versatile tools for many applications where traditional models might fall short.

Time series LLMs can make accurate predictions on new, unseen datasets without requiring additional training or fine-tuning. This is particularly useful for rapidly changing environments where new data emerges frequently.

These models often use techniques like multi-scale decomposition to separate and analyse different components of the time series (e.g., trends, seasonality). This helps in isolating and modelling autocorrelation at various scales.

By leveraging the power of LLMs, these models can automatically extract relevant features from the data, including those related to autocorrelation. This reduces the need for extensive manual feature engineering. Time series LLMs can capture complex, non-linear relationships, and patterns in data that traditional statistical models like ARIMA or GARCH might miss, especially for data that has not been seen or pre-processed.

Popular time series LLMs for forecasting and predictive analytics include Google’s TimesFM, IBM’s TinyTimeMixer, and AutoLab’s MOMENT.

Personalised Portfolios

Personalised portfolios are investor-specific portfolios tailored to an individual’s financial goals, risk tolerance and preferences. Building one involves goal setting, risk assessment, asset allocation, investment selection, monitoring and rebalancing, and performance evaluation. Most of these steps can be supported by LLMs:

  • LLMs can be used to analyse an investor’s financial behaviour, such as spending and investing patterns. They can also identify common behavioural biases like overconfidence, loss aversion and herd behaviour, and by providing more personalised recommendations, help investors make more rational decisions.
  • LLMs can continuously assess an investor’s risk tolerance by analysing their reactions to market changes and their investment decisions over time. They can generate and analyse responses to customized risk assessment questionnaires, providing a more nuanced understanding of an investor’s risk profile.
  • An LLM chatbot lets the investor interact continuously, with queries answered, market sentiment discussed and investment advice given; these interactions help the model track changes in investment behaviour and adjust the portfolio accordingly. The same data can feed investor behaviour predictions, helping portfolio managers anticipate and respond to changes in investor preferences.
  • LLMs excel at text summarisation, which helps the investor break down investment reports and simplifies the technical jargon in them. LLMs can also take reports from diverse sources as input and produce a comprehensive summary of a stock, as in the sketch after this list.
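
A minimal summarisation sketch assuming the OpenAI Python client; the model name and prompt are illustrative choices, and report_text stands in for a real document:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

report_text = "...full text of an analyst report or filing..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any capable chat model works
    messages=[
        {"role": "system",
         "content": "Summarise financial reports for a retail investor. "
                    "Explain any technical jargon in plain language."},
        {"role": "user", "content": report_text},
    ],
)
print(response.choices[0].message.content)
```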

Market Sentiment Analysis

Sentiment analysis classifies input texts according to their sentiment: positive, negative or neutral. The words in the text are converted to vectors using an embedding model such as Word2Vec or GloVe, and a deep learning model is then trained on sentiment-annotated data. Instead of training a model from scratch, an LLM fine-tuned on the available data can be used, easing the training process and requiring less cost, processing power and time.
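
A minimal sketch using the Hugging Face transformers pipeline; ProsusAI/finbert is one publicly available model already fine-tuned on financial text, assumed here for illustration:

```python
from transformers import pipeline

# FinBERT returns positive / negative / neutral labels for financial text.
classifier = pipeline("text-classification", model="ProsusAI/finbert")

headlines = [
    "Company X beats quarterly earnings estimates and raises guidance.",
    "Regulator opens probe into Company Y's accounting practices.",
]
for h in headlines:
    print(classifier(h)[0])   # e.g. {'label': 'positive', 'score': 0.95}
```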

Analysing the sentiment of financial news and social media interactions in real time helps gauge the current market sentiment around a stock; combined with predictive analysis of the stock price, it can yield promising results in stock price prediction.

LLMs can extract valuable insights from unstructured data, such as earnings call transcripts, financial statements, and analyst reports. This helps traders understand the underlying factors driving stock performance.

OTHER STOCK MARKET APPLICATIONS OF AI

  • Pattern Recognition and Predictive Analytics:

Price patterns in the stock market are a valuable source of insight in technical analysis. AI can analyse vast amounts of financial data and identify intricate patterns that may be difficult for humans to find. Such systems can be configured with actions to take when a pattern is detected again, executing the trade at the optimal time. Combined with anomaly detection methods, these statistical techniques can produce valuable forecasts about market movements.
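
A minimal sketch of one such anomaly detection method, a rolling z-score on synthetic returns (illustrative, not a production detector):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
returns = pd.Series(rng.normal(0, 0.01, 500))
returns.iloc[300] = -0.08   # inject an unusual one-day move

# Rolling z-score: how far today's return sits from its recent norm.
mean = returns.rolling(60).mean()
std = returns.rolling(60).std()
zscore = (returns - mean) / std

anomalies = returns[zscore.abs() > 4]
print(anomalies)   # flags the injected shock around index 300
```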

  • High Frequency Trading:

High frequency trading (HFT) means executing many orders within a very short time. AI can execute these orders quickly, assess the risks and adjust trading strategies instantaneously. Trained AI systems can act within microseconds by scrutinising live market data, anticipating stock trajectories, adjusting trade strategies accordingly and conducting transactions on the go. Through HFT, AI systems can exploit minute price discrepancies and arbitrage opportunities before they are visible to other market participants, capitalising on small, consistent profits. These systems can also handle large portfolios, adjusting trading positions dynamically to hedge risks and optimise returns based on predictive insights and real-time market conditions.
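
A deliberately oversimplified sketch of the core arbitrage check; real HFT systems run this logic on co-located, low-latency hardware, not in Python:

```python
# Toy illustration of the price-discrepancy check an HFT system runs continuously.
def arbitrage_signal(bid_venue_a: float, ask_venue_b: float,
                     cost_per_share: float) -> float:
    """Profit per share from buying on venue B and selling on venue A, after costs."""
    return bid_venue_a - ask_venue_b - cost_per_share

edge = arbitrage_signal(bid_venue_a=100.04, ask_venue_b=100.01,
                        cost_per_share=0.02)
if edge > 0:
    print(f"fleeting opportunity: {edge:.2f} per share")  # act before it closes
```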

  • Risk Management: AI can leverage advanced algorithms to assess and mitigate potential risks in trading portfolios. These models are trained to recognise patterns that indicate risk, such as unusual market volatility or deviations in trading activity, and can automatically adjust trading strategies to minimise losses. AI-powered models also support ongoing risk evaluation by continuously monitoring market dynamics and modifying asset allocations as necessary. With such tools, traders can maintain an optimal balance between risk and return, significantly reducing the chances of large-scale financial losses while capitalising on market opportunities. JPMorgan Chase has adopted AI to enhance trading, risk management and compliance, developing LOXM, a machine learning tool that optimises equity trade execution by analysing historical trade data to minimise market impact and transaction costs. One common building block of such risk monitoring is sketched below.
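
The building block is Value at Risk (VaR); a minimal historical-VaR sketch on synthetic returns (our illustration, not JPMorgan's method):

```python
import numpy as np

rng = np.random.default_rng(3)
daily_returns = rng.normal(0.0005, 0.012, 1000)   # stand-in for portfolio history

# Historical 1-day Value at Risk at 99%: the loss exceeded only 1% of the time.
var_99 = -np.percentile(daily_returns, 1)
portfolio_value = 1_000_000
print(f"1-day 99% VaR: {var_99 * portfolio_value:,.0f}")
```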

CHALLENGES

  • Data:
    • AI models require quality data in vast amounts for training. Quality here means the data should be unbiased, drawn from varied and reliable sources (so the model is not trained in a monotonous manner), free of noise, properly annotated in the case of text, and current rather than outdated.
    • Preparing data in such quantities for model training is laborious and requires intensive manual work.
  • Infrastructure:
    • Deploying and training an AI model is computationally intensive; high-end processing units with high clock speeds and performance are needed for the task.
    • Alongside the processing units, memory devices, power supply management and data handling software are also required for the process to run smoothly.
  • Cost:
    • All the above requirements are costly and demand a considerable initial investment.
    • Even after training, a model needs substantial fine-tuning and adjustment to become reliable and thus commercially viable.
  • Limited availability of pre-trained LLMs:
    • Pre-trained models can be an effective alternative to the challenges above, but such models are not always easy to come by.
    • Pre-trained LLMs typically require hefty usage fees for use in applications.
    • There can be usage limits and response delays if the application calls the LLM via the provider’s hosted servers.
  • Biases in the model’s training:
    • Because the models arrive pre-trained, the user cannot verify the integrity of the training data. There is no guarantee of unbiased training, which depends on the interests of the organisation that owns the model.
    • As a result, even after fine-tuning, the model may not yield the desired results if it was biased in its initial training phase.
  • Data Privacy Issues
    • As the models are more interactive with the user in cases like behaviour analysis, there is a risk of data leakage if sensitive information is included by chance in the training data or if the model memorizes and reproduces such information during interactions.
    • AI systems, including LLMs, can have security vulnerabilities that might be exploited by malicious actors, leading to unauthorized access to sensitive data.

Trading one asymmetry for another?

Two decades ago, you had a definite edge in stock markets if you were close to the source of information. Over time, technological advancement made that information asymmetry a thing of the past; today, for the most part, information is secondary to skill and decision making. AI is arguably having the same effect on experience: a model trained on the data that constitutes experience can provide equally good, if not better, insights, shifting the asymmetry towards the ability to build and maintain an AI model. That said, in our opinion AI is at present more of a decision enabler and may not fully replace experience, which is a contextual adaptation to situations grounded in ethics, morality, emotional intelligence, creativity and intuition.

The main advantage humans have is adaptability: we adjust to situations based on the impact of certain outcomes. An AI model will struggle when it encounters a situation for which there is no training data to bank on. In such situations, an intelligent agent can still take decisions by interacting with its environment, an approach called reinforcement learning, though it is still at an evolving stage.
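
A toy sketch of the idea, entirely illustrative: two made-up market regimes and tabular Q-learning, one of the simplest reinforcement learning algorithms, where the agent learns purely by acting and observing rewards:

```python
import numpy as np

# 2 market regimes (0=down, 1=up), 2 actions (0=stay out, 1=hold).
rng = np.random.default_rng(4)
Q = np.zeros((2, 2))
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 1

for step in range(5000):
    action = rng.integers(2) if rng.random() < eps else int(Q[state].argmax())
    reward = (1 if state == 1 else -1) if action == 1 else 0
    next_state = state if rng.random() < 0.8 else 1 - state  # regimes persist
    # Q-learning update: nudge the estimate toward reward + discounted future value.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q)  # the agent learns to hold in up regimes and stay out in down regimes
```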

AI in Technical Analysis

But within the aforementioned limitations, AI models can be trained to adapt to dynamic market conditions in ways that may be hard for the human mind to process in real time. For example, Bollinger Bands conventionally use 2 standard deviations on either side of a 20-day simple moving average, giving support and resistance levels while also suggesting overbought and oversold signals. However, the applicability of such signals depends on the market conditions at the time, and one could change the standard deviation depending on the prevailing trading range. A similar situation arises with RSI, an oscillator ranging from 0 to 100, where values below 30 point to oversold conditions and values above 70 to overbought conditions. The applicability of these thresholds depends on whether the asset is in an oscillating trend; the indicator becomes less insightful in a trending market. An intelligent model can adjust its inferences dynamically by studying the oscillating or directional nature of the trend, while also shifting the oversold and overbought extremities dynamically to find the optimal turnaround point based on back-tested data.
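
To make this concrete, here is a minimal sketch (pandas assumed, synthetic prices) of both indicators with the window and band width exposed as parameters, the very knobs an adaptive model would retune from back-tested data:

```python
import numpy as np
import pandas as pd

def bollinger(close: pd.Series, window: int = 20, num_std: float = 2.0):
    """Bands around a simple moving average; num_std can be tuned to the regime."""
    sma = close.rolling(window).mean()
    std = close.rolling(window).std()
    return sma - num_std * std, sma, sma + num_std * std

def rsi(close: pd.Series, window: int = 14) -> pd.Series:
    """Classic RSI on a 0-100 scale; the 30/70 thresholds suit ranging markets."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    return 100 - 100 / (1 + gain / loss)

rng = np.random.default_rng(5)
close = pd.Series(100 + rng.normal(0, 1, 250).cumsum())
lower, mid, upper = bollinger(close, num_std=2.5)   # widened for a volatile phase
print(rsi(close).tail())
```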

A sweet problem to have?

The ability to layer in multiple indicators and adjust parameters dynamically is a powerful tool. The challenge, again, is decision making. Imagine that your intelligent model tells you about a trading setup with an 88% probability of success. Surely a no-brainer. Now imagine the model adds that there is a small probability the loss incurred could wipe out your entire trading capital. How would you decide on the trade? As models become more intelligent, risk management will remain key. Decision making will still be a headache humans get to keep.
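
As a worked illustration of why this is not a no-brainer, consider the expected value and the Kelly fraction (a classical position-sizing rule, our addition to the discussion) under an assumed payoff where the rare loss is three times the typical gain:

```python
# Worked example for a hypothetical setup: 88% win probability.
# Assume (illustratively) the win pays +1R and the loss costs -3R.
p, win, loss = 0.88, 1.0, 3.0

expected_value = p * win - (1 - p) * loss   # 0.88 - 0.36 = 0.52R: positive
# Kelly fraction f* = p/l - q/w for win/loss sizes w, l (one classical rule).
kelly = p / loss - (1 - p) / win            # ~0.173 of capital, at most

print(f"EV per trade: {expected_value:.2f}R, Kelly fraction: {kelly:.3f}")
# A high win rate alone doesn't settle sizing: the rare loss dominates the risk.
```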
