large language models

  • Reputation in Insurance: Unintended Consequences for Capital Allocation
    Reputation is widely regarded as a stabilizing factor in financial institutions, reducing capital constraints and enhancing firm resilience. However, in the insurance industry, where capital requirements are shaped by solvency regulations and policyholder behavior, the effects of reputation on capital management remain unclear. This paper examines the unintended consequences of reputation in insurance asset-liability management, focusing on its impact on capital allocation. Using a novel reputation risk measure based on large language models (LLMs) and actuarial models, we show that reputation shifts influence surrender rates, altering capital requirements. While higher reputation reduces surrender risk, it increases capital demand for investment-oriented insurance products, whereas protection products remain largely unaffected. These findings challenge the conventional wisdom that reputation always eases capital constraints, highlighting the need for insurers to integrate reputation management with capital planning to avoid unintended capital strain. A schematic numerical sketch of the surrender-rate mechanism appears after the list.
  • Dissecting the Sentiment-Driven Green Premium in China with a Large Language Model
    Standard financial theory predicts a carbon premium, as brown stocks bear greater uncertainty under the climate transition. However, a contrary green premium has been identified in China, as evidenced by the return spread between green and brown sectors. Aggregate climate-transition sentiment, measured from news data with a large language model, explains 12%-33% of the variability in the anomalous alpha. This effect intensifies after China announced its national commitments. The sentiment-driven green premium is attributed to speculative trading by retail investors targeting green “concept stocks.” The discussion also highlights the advantages of large language models over lexicon-based sentiment analysis. An illustrative sketch of the sentiment-regression step appears after the list.
  • Large Language Models and Return Prediction in China
    We examine whether large language models (LLMs) can extract contextualized representations of Chinese public news articles to predict stock returns. Based on representativeness and influence, we consider seven LLMs: BERT, RoBERTa, FinBERT, Baichuan, ChatGLM, InternLM, and their ensemble model. We show that news tones and return forecasts extracted by LLMs from Chinese news significantly predict future returns. The value-weighted long-minus-short portfolios yield annualized returns between 35% and 67%, depending on the model. Building on the return predictive power of LLM signals, we further investigate their implications for information efficiency. The LLM signals contain firm fundamental information, and it takes two days for them to be incorporated into stock prices. The predictive power of the LLM signals is stronger for firms with more information frictions, more retail holdings, and more complex news. Interestingly, many investors trade in the opposite direction of the LLM signals upon news releases and could benefit from them. These findings suggest that LLMs can help process public news and thus contribute to overall market efficiency. A sketch of the long-minus-short portfolio construction appears after the list.
  • The Market Value of Generative AI: Evidence from China Market
    Our study explored the rise of public companies competing to launch large language models (LLMs) in the Chinese stock market after ChatGPT's success. We analyzed 25 companies listed on Chinese stock exchanges and found that the cumulative abnormal return (CAR) reached up to 3% before the LLMs' release, indicating a positive view from insiders. However, CAR dropped to around 1.5% after release. Early LLM releases drew better market reactions, especially those focused on customer service, design, and education; conversely, LLMs dedicated to IT and civil service were received negatively. A sketch of the CAR calculation appears after the list.
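
For the insurance paper, the core mechanism is that higher reputation lowers the surrender rate, so more investment-oriented policies persist to maturity and the guaranteed benefits the insurer must back grow. The sketch below illustrates only that direction of effect; the linear reputation-to-surrender link and all parameters are illustrative assumptions, not the paper's actuarial model.

```python
# Minimal sketch (not the paper's model): a reputation-linked surrender rate
# and its effect on a capital proxy for an investment-oriented product.
# All parameters and the linear link are illustrative assumptions.

def surrender_rate(reputation, base=0.08, sensitivity=0.05):
    """Annual surrender rate, assumed to fall as reputation improves."""
    return max(base - sensitivity * reputation, 0.01)

def capital_proxy(reputation, horizon=10, guarantee=100.0, rate=0.02):
    """PV of a guaranteed maturity benefit, paid only if the policy persists."""
    q = surrender_rate(reputation)
    persistence = (1.0 - q) ** horizon            # probability of no surrender
    return guarantee * persistence / (1.0 + rate) ** horizon

for rep in (0.0, 0.5, 1.0):                       # low / medium / high reputation
    print(f"reputation={rep:.1f}  capital proxy={capital_proxy(rep):.2f}")
```

Higher reputation raises the persistence probability, so the liability the insurer must capitalize grows, matching the paper's direction of effect for investment-oriented products.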
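For the green-premium paper, the 12%-33% figure is the share of variance in the green-minus-brown spread explained by an LLM-based sentiment index. The sketch below shows one way such a regression could be set up; `score_with_llm`, the column names, and the monthly aggregation are hypothetical placeholders, not the authors' pipeline.

```python
# Illustrative sketch: regress a green-minus-brown return spread on an
# LLM-based climate-transition sentiment index and report the R^2.
# Both series are assumed to be indexed by calendar month.

import numpy as np
import pandas as pd

def score_with_llm(headline: str) -> float:
    """Placeholder: return a sentiment score in [-1, 1] for one news item."""
    return 0.0  # replace with a real LLM call or classifier

def monthly_sentiment(news: pd.DataFrame) -> pd.Series:
    """Average LLM scores of climate-transition news within each month."""
    scores = news["headline"].map(score_with_llm)
    return scores.groupby(news["date"].dt.to_period("M")).mean()

def explained_share(spread: pd.Series, sentiment: pd.Series) -> float:
    """R^2 from an OLS of the green-minus-brown spread on the sentiment index."""
    df = pd.concat([spread, sentiment], axis=1, keys=["spread", "sent"]).dropna()
    x = np.column_stack([np.ones(len(df)), df["sent"]])
    beta, *_ = np.linalg.lstsq(x, df["spread"], rcond=None)
    resid = df["spread"] - x @ beta
    return 1.0 - resid.var() / df["spread"].var()
```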
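For the return-prediction paper, the 35%-67% annualized returns come from value-weighted long-minus-short portfolios sorted on LLM signals. Below is a schematic of how such a sort could be implemented; the panel layout and column names (`date`, `llm_tone`, `mktcap`, `next_ret`) are assumptions for illustration, not the authors' code.

```python
# Schematic sketch: daily value-weighted top-minus-bottom decile portfolio
# sorted on an LLM news-tone signal. Column names are illustrative.

import pandas as pd

def long_short_returns(panel: pd.DataFrame, n_groups: int = 10) -> pd.Series:
    """Daily return of the top-decile minus bottom-decile value-weighted portfolios."""
    def one_day(day: pd.DataFrame) -> float:
        day = day.assign(bucket=pd.qcut(day["llm_tone"], n_groups,
                                        labels=False, duplicates="drop"))
        vw = lambda g: (g["next_ret"] * g["mktcap"]).sum() / g["mktcap"].sum()
        top = vw(day[day["bucket"] == day["bucket"].max()])
        bottom = vw(day[day["bucket"] == day["bucket"].min()])
        return top - bottom
    return panel.groupby("date").apply(one_day)

# A rough annualization of the mean daily spread, for comparison with 35%-67%:
# annualized = long_short_returns(panel).mean() * 252
```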
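For the generative-AI event study, CAR is a cumulative abnormal return around the LLM release date. A minimal market-adjusted version is sketched below; the window length and the market-adjusted (rather than market-model) abnormal return are simplifying assumptions, not necessarily the authors' specification.

```python
# Minimal event-study sketch: market-adjusted cumulative abnormal return (CAR)
# around an event date. Window length and adjustment method are assumptions.

import pandas as pd

def car(stock_ret: pd.Series, market_ret: pd.Series,
        event_date: pd.Timestamp, pre: int = 10, post: int = 10) -> float:
    """Sum of (stock - market) daily returns over the [-pre, +post] trading-day window."""
    abnormal = (stock_ret - market_ret).dropna()   # market-adjusted abnormal returns
    pos = abnormal.index.get_loc(event_date)       # event day must be in the index
    window = abnormal.iloc[max(pos - pre, 0): pos + post + 1]
    return float(window.sum())
```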