Large Language Models (LLMs)

  • Burden of Improvement: When Reputation Creates Capital Strain in Insurance
    A strong reputation is a cornerstone of corporate finance theory, widely believed to relax financial constraints and lower capital costs. We challenge this view by identifying a 'reputation paradox': under modern risk-sensitive regulation, a better reputation may paradoxically increase capital strain for firms with long-term liabilities. We argue that an improvement in a firm's reputation alters customer behavior, which extends liability duration and amplifies measured risk. Using the life insurance industry as an ideal laboratory, we develop an innovative framework that integrates LLMs with actuarial cash flow models and confirms that improved reputation increases regulatory capital demands. Through a comparative analysis across major regulatory regimes (C-ROSS, Solvency II, and RBC) and two insurance products, we further demonstrate that improvements in reputation affect capital requirements unevenly across product types and regulatory frameworks. Our findings challenge the conventional view that reputation uniformly alleviates capital pressure, emphasizing the necessity for insurers to strategically align reputation management with solvency planning.
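The liability-duration channel described above can be sketched numerically. The following toy model is an assumption-laden illustration, not the paper's actuarial framework: a lower surrender rate (standing in for better reputation) keeps more policies in force longer, which makes the liability's present value more sensitive to a downward interest-rate shock, i.e. it raises the capital buffer a risk-sensitive regime would demand. All cash flows, rates, and the shock size are invented for illustration.

```python
# Illustrative sketch (hypothetical parameters): lower surrender rates
# lengthen liability duration and raise rate-shock capital strain.

def liability_pv(cashflow, surrender_rate, discount_rate, years=30):
    """PV of a level annual liability cash flow, decremented by surrenders."""
    pv, in_force = 0.0, 1.0
    for t in range(1, years + 1):
        in_force *= (1.0 - surrender_rate)  # policies lapsing each year
        pv += cashflow * in_force / (1.0 + discount_rate) ** t
    return pv

def capital_strain(surrender_rate, base_rate=0.03, shock=-0.01):
    """Capital proxy: PV increase when discount rates fall by `shock`."""
    base = liability_pv(100.0, surrender_rate, base_rate)
    stressed = liability_pv(100.0, surrender_rate, base_rate + shock)
    return stressed - base

# Better reputation -> fewer surrenders -> longer liabilities -> larger
# sensitivity to the rate shock, hence more regulatory capital.
print(capital_strain(surrender_rate=0.10))  # high-surrender book
print(capital_strain(surrender_rate=0.02))  # low-surrender (reputable) book
```

The low-surrender book shows the larger strain, which is the paradox in miniature: the "better" book is the more capital-hungry one under a rate stress.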
  • Reputation in Insurance: Unintended Consequences for Capital Allocation
    Reputation is widely regarded as a stabilizing factor in financial institutions, reducing capital constraints and enhancing firm resilience. However, in the insurance industry, where capital requirements are shaped by solvency regulations and policyholder behavior, the effects of reputation on capital management remain unclear. This paper examines the unintended consequences of reputation in insurance asset-liability management, focusing on its impact on capital allocation. Using a novel reputation risk measure based on large language models (LLMs) and actuarial models, we show that reputation shifts influence surrender rates, altering capital requirements. While higher reputation reduces surrender risk, it increases capital demand for investment-oriented insurance products, whereas protection products remain largely unaffected. These findings challenge the conventional wisdom that reputation always eases capital constraints, highlighting the need for insurers to integrate reputation management with capital planning to avoid unintended capital strain.
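The reputation-to-surrender channel this abstract relies on can be sketched as a simple mapping. The linear form, the sensitivity parameter, and the index range below are illustrative assumptions, not the paper's estimated model; in the paper the reputation measure is LLM-derived from text, which is abstracted here as a score in [-1, 1].

```python
# Hypothetical sketch: an LLM-derived reputation index shifts surrender
# rates, the input that then drives capital requirements. The linear
# mapping and all parameter values are illustrative, not estimated.

def surrender_rate(base_rate, reputation_index, sensitivity=0.5):
    """Higher reputation -> fewer surrenders (floored at zero).

    reputation_index: score in [-1, 1], e.g. average LLM news tone.
    """
    return max(0.0, base_rate * (1.0 - sensitivity * reputation_index))

print(surrender_rate(0.08, reputation_index=0.6))   # improved reputation
print(surrender_rate(0.08, reputation_index=-0.4))  # damaged reputation
```

Feeding the lower surrender rate into an asset-liability model is what produces the unintended effect described above for investment-oriented products, while protection products, whose liabilities are less surrender-driven, are largely insulated.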
  • Large Language Models and Return Prediction in China
    We examine whether large language models (LLMs) can extract contextualized representations of Chinese public news articles to predict stock returns. Based on representativeness and influence, we consider seven LLMs: BERT, RoBERTa, FinBERT, Baichuan, ChatGLM, InternLM, and their ensemble model. We show that news tones and return forecasts extracted by LLMs from Chinese news significantly predict future returns. The value-weighted long-minus-short portfolios yield annualized returns between 35% and 67%, depending on the model. Building on the return predictive power of LLM signals, we further investigate their implications for information efficiency. The LLM signals contain firm fundamental information, and it takes two days for them to be incorporated into stock prices. Their predictive power is stronger for firms with more information frictions and more retail holdings, and for more complex news. Interestingly, many investors trade in the opposite direction of LLM signals upon news releases and could benefit from them. These findings suggest LLMs can be helpful in processing public news and thus contribute to overall market efficiency.
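The value-weighted long-minus-short construction mentioned above can be sketched as follows. This is a generic illustration of the portfolio-sort technique, not the paper's code: the tone scores, returns, market caps, and the quintile cutoff are toy inputs, and in the paper the tone scores come from the listed LLMs rather than being hard-coded.

```python
# Hypothetical sketch: rank stocks by LLM news tone each period, go long
# the top fraction and short the bottom fraction, value-weighted.
# All inputs below are toy numbers, not the paper's data.

def long_minus_short(tones, next_returns, caps, quantile=0.2):
    """Value-weighted return of top-tone minus bottom-tone stocks."""
    ranked = sorted(tones, key=tones.get)      # ascending by tone
    k = max(1, int(len(ranked) * quantile))
    short_leg, long_leg = ranked[:k], ranked[-k:]

    def vw_return(leg):
        total_cap = sum(caps[s] for s in leg)
        return sum(caps[s] / total_cap * next_returns[s] for s in leg)

    return vw_return(long_leg) - vw_return(short_leg)

tones = {"A": 0.9, "B": 0.1, "C": -0.8, "D": 0.4, "E": -0.2}
rets = {"A": 0.03, "B": 0.00, "C": -0.02, "D": 0.01, "E": -0.01}
caps = {"A": 50.0, "B": 30.0, "C": 20.0, "D": 40.0, "E": 10.0}
print(long_minus_short(tones, rets, caps))  # long A, short C here
```

Rebalancing this spread at each news release and compounding it over the sample is what produces annualized figures like the 35% to 67% range the abstract reports.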
  • The Market Value of Generative AI: Evidence from China Market
    Our study explored the rise of public companies competing to launch large language models (LLMs) in the Chinese stock market after ChatGPT's success. We analyzed 25 companies listed on Chinese stock exchanges and found that the cumulative abnormal return (CAR) reached up to 3% before the LLMs' release, indicating a positive view from insiders. However, CAR dropped to around 1.5% after their release. Early LLM releases drew better market reactions, especially those focused on customer service, design, and education. Conversely, LLMs dedicated to IT and civil service received negative feedback.
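The CAR measure used in this event study can be sketched generically. The following is a standard market-model formulation, not the paper's exact specification: the abnormal return each day is the stock return minus a market-model prediction, summed over the event window; the alpha and beta are assumed to be pre-estimated, and the 5-day window of returns is invented for illustration.

```python
# Hypothetical sketch of an event-study CAR: abnormal return is the
# stock return minus the market-model prediction (alpha + beta * r_m),
# cumulated over the event window. All numbers below are toy inputs.

def cumulative_abnormal_return(stock_rets, market_rets, alpha, beta):
    """Sum of r_it - (alpha + beta * r_mt) over the event window."""
    return sum(r - (alpha + beta * m)
               for r, m in zip(stock_rets, market_rets))

# Toy 5-day window around a hypothetical LLM launch announcement.
stock = [0.012, 0.008, 0.020, -0.004, 0.003]
market = [0.005, 0.002, 0.004, -0.001, 0.001]
print(cumulative_abnormal_return(stock, market, alpha=0.0, beta=1.0))
```

Averaging such CARs across the 25 launch events, split into pre- and post-release windows, is how figures like the 3% pre-release and 1.5% post-release levels in the abstract are obtained.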