Knowledge graph - from art to science
Fundamental credit analysis is the bedrock of active fixed income investing and the capability an asset manager wants to showcase. This type of analysis, together with the skills of portfolio managers, traders, and quant analysts, is ultimately what should generate alpha, the part of investment performance investors should be willing to pay for. Yet credit analysis is more an art than a science, and as an art, it is subjective, its value is time-sensitive, and it can be hard to digest.
Subjectivity in the face of complexity
Judging the creditworthiness of a company is complex, as it requires assessing not only an issuer’s ability to repay its debt but also its management’s willingness to return borrowed capital.
To determine an issuer’s ability to service its debt, credit analysts build financial models. Those models rely on a limited number of sources, including companies’ financial statements, peer group analysis, industry-specific publications, and sell-side research. While seemingly a quantitative approach, the numerous subjective assumptions required to forecast a company’s financial status easily turn this task into a mere quantification of qualitative factors. Nothing wrong there, but the model will only be as good as the assumptions it relies on.
Determining management’s willingness to repay debt is an even more subjective game. Analysts scrutinise the language used in earnings announcements, meet with senior management, and gauge how truthful a CFO is.
The discounted value of old information
A second challenge with fundamental credit analysis is its reliance on mostly outdated figures. Not only are financial statements weeks, if not months, old, but analysts also need days to digest a company’s announcement, evaluate its impact on credit risk, and find time in the portfolio manager’s schedule to pitch a trade idea. Meanwhile, the market is making up its own mind, swinging the bond’s price so that by the time a fully informed decision is made, much has already been priced in. Most of the elusive alpha is gone.
Less is more
Having started my career as a credit analyst, I appreciate analysts’ desire to showcase the assessment they have made, all the considerations weighed, and all the hard work. Then I moved into a trader role, and the work produced by analysts suddenly seemed to arrive too late, with recommendations too convoluted to digest. After a further move to portfolio management, I realised that less is more and that when managing a portfolio with hundreds of bonds, you physically cannot absorb all the details provided by analysts. To add to the challenge, analysts typically deliver more in-depth research on widely covered issuers, as information is more readily available for those names.
Call for action
Fundamental credit analysis has barely evolved over the past decades. Why change it now? Because of C.O.S.T.:
- Competitiveness
Active asset managers, those relying upon and paying for large fundamental research capabilities, have been suffering from fee pressure for years, driven by the advance of passive solutions like ETFs. While there is little doubt that some managers who have invested heavily in their research capability can beat the market, the industry as a whole struggles to cover those costs. Hence the need to reduce expenses to deliver investment vehicles that can consistently outperform their benchmark after fees.
- Opportunity
Cutting costs does not necessarily mean reducing fundamental research capability but re-orienting analysts towards higher-margin opportunities and less correlated markets. This could mean moving away from well-researched investment-grade bonds in developed markets towards less liquid and even private alternatives.
- Scale
The current setup of credit research is constraining. Consider the now second-largest bond market in the world, the Chinese onshore bond market. The investment universe is vast, while the pool of experienced analysts is tiny, owing to the market’s recent emergence and the high language barrier to entry. This calls for a more systematic approach to fundamental research.
- Timing
With an explosion in alternative data to generate alpha, from AI-driven sentiment analysis of social media to machine-learning-powered image recognition of satellite pictures and sensor readouts delivered by the Internet of Things, timing becomes essential, and the ability to account for new pieces of information quickly is a competitive advantage.
What ‘good’ looks like
The ideal tool to deliver fundamental analysis would then be objective and fast, and would deliver concise recommendations to the decision-maker, be it a human portfolio manager or an algorithm. This system would continuously screen information as it becomes available, ideally via application programming interfaces (APIs) for speed and accuracy. The engine would also continuously learn to adjust the weight of each piece of information collected in terms of its quality, relevance, and impact on market prices. Finally, the analysis should be delivered in a way that can be interpreted by humans, not only for the benefit of decision-makers but also for regulatory purposes, as it provides a rationale for each trade.
The obstacles to overcome
The biggest challenge for this type of tool is to make sense of mostly unstructured data such as text, video or audio documents and to organise the information so that inferences can be drawn. Furthermore, the relationships between pieces of information must be clarified to resolve ambiguities when language queries are processed, e.g., is an article mentioning ‘equity’ referring to the asset class or to a quality?
Enter knowledge graph
A knowledge graph (KG) is a way to organise data. It is not meant to replace traditional databases such as SQL databases, but to complement them by adding semantics that can be processed by computers. Nor does it substitute for data lakes; rather, it takes some of their data points and connects them in meaningful ways.
In contrast to traditional databases, a KG contains not only information about specific entities but also information on the relationships between them. A relationship links two entities, creating a so-called triple, and those triples define the graph. A simple example of such a triple could be (Tim Cook, is CEO of, Apple). Eventually, a knowledge graph could represent the output of thousands of research analysts, reading through hundreds of documents, understanding them, processing them and connecting information across those articles.
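To make the idea concrete, here is a minimal sketch in Python of a knowledge graph stored as plain (subject, relation, object) triples. The entities, relations and the indexing helper are purely illustrative assumptions, not a reference to any particular KG product.

```python
from collections import defaultdict

# Each fact is a (subject, relation, object) triple; the examples are illustrative only.
triples = [
    ("Tim Cook", "is CEO of", "Apple"),
    ("Apple", "sells", "iPhone"),
    ("Apple", "issues", "AAPL 2030 bond"),
]

# Index the triples by subject so facts about an entity can be retrieved quickly.
facts_about = defaultdict(list)
for subject, relation, obj in triples:
    facts_about[subject].append((relation, obj))

# Query: what does the graph know about Apple?
for relation, obj in facts_about["Apple"]:
    print("Apple", relation, obj)
```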
Another way to think of KG is as a base of knowledge or the memory of your AI-driven tools. Your KG will continuously gather information, connect every piece, and effectively generate derived knowledge. The relationship created between facts provides context and hence meaning, which makes them understandable by machines. Your AI algorithm can then leverage that information to solve problems and answer questions.
To give an example, the entity ‘iPhone’ could be linked to the entity ‘OLED display’ by the relation ‘contains’, and the entity ‘OLED display’ could be linked to the entity ‘Samsung’ by the relation ‘is produced by’. Thanks to those relationships, an AI-driven news screener could ‘understand’ that a spike in iPhone sales will benefit Samsung’s profitability.
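A rough sketch of that inference step, again with made-up entities and relations: starting from a product mentioned in a headline, the screener follows ‘contains’ and then ‘is produced by’ edges to surface the suppliers that stand to benefit.

```python
# Toy supply-chain triples; names and relations are illustrative assumptions.
triples = [
    ("iPhone", "is sold by", "Apple"),
    ("iPhone", "contains", "OLED display"),
    ("OLED display", "is produced by", "Samsung"),
]

def suppliers_of(product, graph):
    """Follow 'contains' edges, then 'is produced by' edges, to find suppliers."""
    components = {o for s, r, o in graph if s == product and r == "contains"}
    return {o for s, r, o in graph if s in components and r == "is produced by"}

# A headline about a spike in iPhone sales can now be linked to Samsung.
print(suppliers_of("iPhone", triples))  # {'Samsung'}
```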
While this simple example might not impress you, add thousands of entities and relationships, and your AI-powered tool might now be able to recognise that a demand increase of the specified size for iPhones would actually outpace Samsung’s manufacturing capacity. Instead of supporting Samsung’s profitability, the same piece of news would then imply that Apple’s sales will suffer from a supply shortage, negatively impacting its bottom line. But the ultimate power of having entities linked by relationships is that once connections are established, patterns can be identified and previously undetected connections uncovered.
The final step involves translating that information in a way that can be understood by a human, using natural language generation. Of course, both the analysis and its communication are produced as fast as the news is generated, that is, before an analyst has even had a chance to read the article, let alone process it and make any investment recommendation.
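As a final illustration, here is a very simplified, template-based stand-in for that generation step, turning an inferred chain of triples into a readable sentence; a production system would rely on far richer natural language generation, and the event and chain below are hypothetical.

```python
# Render an inferred chain of triples as a plain-English explanation.
def explain(event, chain):
    path = ", which ".join(f"{relation} {obj}" for _, relation, obj in chain)
    return f"{event}: the {chain[0][0]} {path}; flagged for review."

chain = [
    ("iPhone", "contains", "OLED display"),
    ("OLED display", "is produced by", "Samsung"),
]
print(explain("Spike in iPhone sales", chain))
# Spike in iPhone sales: the iPhone contains OLED display, which is produced by Samsung; flagged for review.
```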
From sci-fi to proof of concept
The technology illustrated above is not wishful thinking. It is already amongst us. Google has been vocal about its use of a knowledge graph since 2012. The innovation lies in applying this technology in the asset management space, one of a rapidly expanding range of potential applications. Within this industry, fixed income is arguably even better suited than equity to implement this technology. While profit margins are melting across asset classes, they already started from a low base on the bond side. The technicalities of this asset class, as well as the abundance of instruments for each issuer, make a KG even more powerful there.
The current research process is outdated, and as we move towards a more scientific approach to fundamental credit analysis, its only artistic aspect will become the shape of the knowledge graph underlying it.
The article was first published on LinkedIn on August 8, 2019.