Investing, fast and slow – Part 2: Investment for Data Scientists 101

Financial markets offer countless ways of making (or losing) money. A key distinction among them is the investment horizon, which can range from fractions of a second to years. Walnut Algorithms and Global Systematic Investors are new investment management firms representing the high-frequency and low-frequency sides, respectively. I sat down to talk with their founders about investing, data, and the challenges of starting up. Part 1, my interview with Guillaume Vidal, co-founder and CEO of Walnut Algorithms, ran last week. Below is my talk with Dr. Bernd Hanke, co-founder and co-Chief Investment Officer of Global Systematic Investors.

What is the origin of Global Systematic Investors?

Bernd Hanke: It came from all of our backgrounds. I did a PhD in finance and then worked for two systematic asset managers, that is, managers who use systematic factors to select individual stocks quantitatively rather than relying on human judgment. Obviously, human judgment goes into the model when you select factors to forecast stock returns, but once you've built your model, the human element is reduced to a necessary minimum in order to remain disciplined. So that was my background. Both of my partners used to be in portfolio management roles at Dimensional Fund Advisors, and one of them has always been very research-oriented. They both come from the same mindset, the same type of background: using systematic factors to forecast asset returns, in our case, stock returns.

How has your strategy evolved over time and how do you expect it to evolve in the future?

BH:  We’ve worked on the strategy for quite some time, building the model, selecting the factors, working on the portfolio construction, on basically how you capture the systematic factors in an optimal, risk-controlled manner that is robust and makes intuitive sense. We developed the model over several years and we will keep enhancing the model as we continue to do more research. We are not making large changes frequently, but we gradually improve the model all the time, as new academic research becomes available, as we try to enhance some of these academic ideas, and as we do our own research.

There is a commonly held view that in today’s markets, investment strategies are increasingly short-lived, and so they stop working quickly. You don’t share this view?

BH: We are using a very low frequency model, so the factors we are using have a fairly long payoff horizon. I think when you talk about factors having a relatively short half-life in terms of usability, that is mostly true for higher frequency factors. If you back-test them, they sometimes look like there's almost no risk associated, just a very high return, and then obviously as soon as people find out about these factors that look almost too good to be true, the effects can go away very quickly. Instead, we are looking at longer-term factors with a payoff horizon of several months or sometimes even a year. We recognize that there's risk associated with these factors, but they have been shown to work over long periods of time. In the US you can go back to the 1920s studying these factors because the data is available. In other regions there's less data, but the findings are consistent. So as long as you are prepared to bear the risk and you diversify across these long-term factors, they can be exploited over long periods of time.

What kind of long-term factors are we talking about?

BH: Our process is based on a value component and a diversification component. When people hear "value", they usually think of the book-to-price ratio; that's probably the best-known value factor. Many academic studies have found that the value effect exists and persists over time. It has its drawdowns, of course, the tech bubble being one of them: value worked very poorly during the bubble but came back strongly after it burst. We've broadened the definition of value. We also use cash-flow and earnings-related factors, and we use a factor related to the net cash distributions that firms make to shareholders.

We are also using a diversification factor: we target a portfolio that is more diversified across company sizes and across sectors than a market-weighted index.
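To make the broadened value composite concrete, here is a minimal sketch in Python using pandas. The tickers, data items and numbers are invented for illustration, and the equal-weighted averaging of z-scored signals is an assumption, not GSI's actual model:

```python
import pandas as pd

# Hypothetical fundamentals for a four-stock universe; tickers, columns and
# numbers are invented and do not reflect GSI's actual data or model.
stocks = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC", "DDD"],
    "book_value": [50.0, 120.0, 30.0, 80.0],     # accounting equity, $m
    "free_cash_flow": [8.0, 15.0, 2.0, 11.0],    # $m
    "net_distributions": [3.0, 10.0, 0.5, 6.0],  # dividends + buybacks, $m
    "market_cap": [400.0, 600.0, 900.0, 500.0],  # $m
})

# Each signal scales a fundamental by market value, in the spirit of
# book-to-price; z-scoring makes the signals comparable before averaging.
for col in ["book_value", "free_cash_flow", "net_distributions"]:
    ratio = stocks[col] / stocks["market_cap"]
    stocks[col + "_z"] = (ratio - ratio.mean()) / ratio.std()

# The broadened value composite: an equal-weighted average of the signals.
z_cols = [c for c in stocks.columns if c.endswith("_z")]
stocks["value_score"] = stocks[z_cols].mean(axis=1)
print(stocks[["ticker", "value_score"]].sort_values("value_score", ascending=False))
```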

And the advantage of being more diversified is lower volatility?

BH:  Not necessarily. Stock-level diversification actually increases volatility because you’re capturing a size effect. You’re investing in smaller companies than a market-weighted index would. But smaller companies are more risky than larger companies. So if you tilt more towards smaller stocks you actually increase the risk, but you also increase returns. On the sector side, the picture is quite different. By diversifying more across sectors than the market-weighted index does, you get both lower risk and higher returns.   

Does the fact that your factors are longer-term and riskier mean that it could take you longer to convince an outside observer that your strategy is working?

BH:  Yeah, that’s true. That’s one of the luxuries that high frequency funds have given that their factors have such a short payoff horizon. They only need relatively short periods of live performance in order to demonstrate that the model works, whereas someone who uses a lower frequency model needs a longer period to evaluate those factors.

So what are the benefits of going with such a slow-trading strategy compared to a fast-trading strategy?

BH: One big advantage, of course, is that these long-term factors have a much higher capacity in terms of the assets you are able to manage with them. A low-turnover strategy is also more robust: even if liquidity decreased and transaction costs increased, it wouldn't really hurt the fund's performance very much because the turnover is so low. For high-turnover, short-term strategies, by contrast, transaction costs and liquidity are key, and even slight changes in the market's liquidity environment can completely destroy their performance. A related advantage is that with lower frequency factors you can also tilt more toward small capitalization stocks, because you're not incurring much turnover even though small cap is more costly to trade. And in small cap there are often more return opportunities than in large cap, presumably because small cap stocks are less efficiently priced.

Once you settled on your investment strategy, was it obvious to you how you would monetize it, that you would go for the fund structure that you have today?

BH: The fund we have now is a UCITS fund. We looked at the different legal structures one could have. It also depends a little on who you want to approach as a client, or who you might be in contact with as a potential client. A very large client, for example, might not even want a fund. They might want a separate account, or they may have an existing account already and then appoint you as the portfolio manager for that account. In that case the client basically determines the structure. If it's a commingled fund like ours, then there are a couple of options available. Some are probably appealing mainly to UK investors and some are more global in nature. The UCITS structure is fairly global: it tends to work for most investors except US investors, who have their own structures that differ from UCITS.

What would be your advice to people who think they have a successful investment strategy and are thinking about setting up their own fund?

BH: Well, my advice would be: find an investor first. Ideally, a mix of investors, so that if one investor backs out, you have someone else to step in. That's obviously easier said than done, but I think it is quite important.

How dependent is your strategy on getting timely and accurate data?

BH: For us, timeliness is not as crucial as for high frequency strategies. Obviously, we want to have the latest information as soon as possible, but if there was a day or perhaps even a week delay in some information coming in, it wouldn’t kill our strategy.  

But data accuracy is very important. The current data we get is usually quite accurate. The same cannot necessarily be said about the historical data we use in back-tests. In the US the data is fairly clean, but not for some other countries. All of the major data vendors claim that there is no survivorship bias in their data, but that is hard to check, and accuracy is often somewhat questionable, for some of the non-US data sources in particular. We're not managing any emerging markets funds, but even in developed markets, going back in time there tend to be many problems, even for standard data types such as market data and accounting data.

And the data sources that you are using now are mostly standard accounting data?

BH: Yes. There are some adjustments we could make and would like to make. One fairly obvious adjustment, for example, would be to use more sector-specific data. If you think about a simple value factor, which some people measure as book-to-price, it basically compares the accounting value of a company to its market value. You could call the accounting value the intrinsic value of the company, and you could measure it differently for different industries. In the oil and gas industry, for example, you might want to look at the reserves these companies have in the ground rather than just using a standard book value. For metals and mining companies, you could do something similar. Other industries have their own sector-specific data items that could be relevant to investors. Most accounting data sources now incorporate quite a lot of sector-specific data items. One issue is that the history is usually not very long, so running a long back-test using sector-specific data is usually not feasible, because that type of data has typically only been collected over the last few years.

What role do you see for data science and data scientists in investment management now and going forward?

BH: Right now there is a huge demand for data scientists. That demand, however, is mostly in the hedge fund area; it is much smaller for long-only funds. We are managing a long-only fund. There are some quantitative asset managers that manage both long-only funds and hedge funds, and they might use a similar investment process for both, so these managers may hire data scientists to work on the long-only portfolios as well. But it's mostly systematic hedge funds, and mostly the higher frequency ones. Different people use "high frequency" in very different ways; what I would call high frequency is factors with a payoff horizon of at most a couple of days, maybe even intraday factors. Those types of hedge funds seem to be the ones hiring the most data scientists at the moment. Also, new service providers that employ data scientists keep popping up, and they then sell services to hedge funds, such as trading strategies or new types of data sets.

How valuable are these non-standard or “alternative” data sources?

BH: The data is there and we now have the computational power to exploit it. So I think it will become more useful, but it's a gradual process. Everybody talks about big data, but I think right now only a small minority of funds have successfully employed non-standard or unstructured data sources (commonly labeled "Big Data") in their strategies in a meaningful manner. For some types of non-standard data, there's an obvious case for using it. For example, credit card payment data can help you see whether particular trends might benefit some companies in the future, or you can look at the structure of sales and try to use that in forecasting, and so on. For other data types it's more doubtful whether the data is useful or not. There is some tendency in the industry at the moment, I think, to be over-enthusiastic about new data without thinking carefully enough about formulating the right questions to investigate with it and doing thoughtful data analysis.

Where do you see investing heading, in terms of passive versus active strategies?

BH: One trend is away from traditional active management. Most institutional investors have come to the conclusion that traditional fundamental active long-only managers have underperformed. So many institutional investors have moved to passive for their long-only allocation, or if not passive, then to what is often referred to as "semi-passive" or "smart beta" strategies. These are mostly one-factor strategies, where the assets, often in an ETF, are managed according to a single factor such as value. For example, fundamental indexing uses a value factor composite, and that is the only factor. There are other such strategies, like minimum risk and momentum. Strictly speaking, everything that is not a market-weighted strategy is active, but investors often call strategies that follow fixed, publicly available rules semi-passive.
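As an aside, here is a toy sketch of what such a fixed-rule, single-factor product looks like, using book value as the sole fundamental; all numbers are invented for illustration:

```python
# Toy contrast between cap weighting and fundamental indexing for three
# stocks, using book value as the (single) value measure; numbers invented.
market_cap = {"AAA": 400.0, "BBB": 600.0, "CCC": 900.0}  # $m
book_value = {"AAA": 50.0, "BBB": 120.0, "CCC": 30.0}    # $m

cap_weights  = {k: v / sum(market_cap.values()) for k, v in market_cap.items()}
fund_weights = {k: v / sum(book_value.values()) for k, v in book_value.items()}

for k in market_cap:
    print(f"{k}: cap-weighted {cap_weights[k]:.1%}, fundamental {fund_weights[k]:.1%}")

# CCC has the largest market cap but the smallest book value, so fundamental
# indexing underweights it relative to the cap-weighted index: a value tilt
# implemented as a fixed, publicly known rule.
```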

And then at the other end of the spectrum, you have hedge funds. It used to be the case that systematic or quantitative fund managers, long-only as well as long/short, mostly used similar factors. That became very apparent in August 2007 during the "quant liquidity crunch". Basically what happened was that most quantitative investors were betting on the same or very similar factors, and as more and more quant investors had to liquidate their positions, the factors moved against them in an extreme manner. So most quant factors had huge drawdowns at the beginning of August 2007. After 2007-2008, hedge funds attempted to move away from these standard factors to more proprietary factors as well as to non-standard data sources, and at the same time more and more data became available. I think the systematic strategies used by many hedge funds are now more differentiated than they were in 2007. However, the opposite might be true for many smart beta strategies. So hedge funds are often trying to limit their portfolios' exposures to the standard factors used by the smart beta industry. Whether they can do this successfully remains to be seen. If there is another quant crisis, that might be the acid test.

So that’s been a fairly significant change over the last 10 years.  If you had a crystal ball, what would be your prediction of how things will be different 10 years from now?

BH: One prediction I would make is that smart beta is not going to remain as simplistic as it often is at the moment. Most likely, it will develop into something like what we had in quant strategies before 2007. People will probably combine fairly well-known smart beta factors like value, momentum, and low risk into multi-factor strategies, rather than offering a separate product for each factor so that investors have to combine the strategies themselves to diversify across factors. It is more efficient if the investment manager combines factors at the portfolio level, because these factors, to the extent that they have low correlation, often partially offset each other. This means that trades based on different factors can be netted against each other, which saves trading costs. That is happening to some degree already: several managers have started offering multi-factor smart beta portfolios.
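The netting argument can be illustrated with a toy calculation; the per-factor trades below are invented, but they show how combining factors before trading reduces turnover:

```python
# Hypothetical rebalancing trades (fraction of portfolio) suggested by two
# factors for the same three stocks; all numbers are invented.
value_trades    = {"AAA": +0.02, "BBB": -0.03, "CCC": +0.01}
momentum_trades = {"AAA": -0.02, "BBB": +0.01, "CCC": +0.01}

# Running each single-factor sleeve separately: every trade hits the market.
separate_turnover = (sum(abs(t) for t in value_trades.values())
                     + sum(abs(t) for t in momentum_trades.values()))

# Combining the factors at the portfolio level first: opposing trades cancel
# before any order is sent, so turnover (and trading cost) falls.
combined = {k: value_trades[k] + momentum_trades[k] for k in value_trades}
combined_turnover = sum(abs(t) for t in combined.values())

print(f"separate sleeves: {separate_turnover:.2%} turnover")  # 10.00%
print(f"combined:         {combined_turnover:.2%} turnover")  #  4.00%
```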

On the hedge fund side, the prediction is more difficult. It remains to be seen how successful artificial intelligence and machine learning strategies turn out to be, and to what extent new data types are exploitable for predicting subsequent stock returns and risk. My suspicion is that there are going to be many disappointments. Some new data types will be worthwhile, but many probably won't be. Similarly for machine learning and artificial intelligence: it is likely that only a small subset of today's tools turns out to be useful.

Do you see fintech companies making headway in investment management, either as asset managers or as suppliers to the industry?

BH: Oh, definitely, on all sides. Robo-advisors are one of the big ones, I guess; they could change a lot about how the asset management industry operates. And it's happening in all areas, including other service providers, portfolio analytics providers and so on. There's a lot of development in this area currently, which is probably a good thing. Among data vendors, for example, there is still a strong oligopoly consisting of Thomson Reuters, FactSet, Bloomberg and S&P, who sometimes charge inflated prices for their data. And the data often isn't particularly clean. Even worse are some of the index providers like MSCI, FTSE and S&P. They offer very simple data at exorbitant prices. They are not really charging clients for the data; they are charging for usage of their brand name, for example, for the right to use the MSCI name in marketing material. Now there are more and more fintech companies offering the same service, minus the brand name, at much lower cost to the client.


Image: Michael Dunn, CC BY 2.0

Investing, fast and slow – Part 1: The Present and the Future of AI in Investment

Financial markets offer countless ways of making (or losing) money. A key distinction among them is the investment horizon, which can range from fractions of a second to years. Walnut Algorithms and Global Systematic Investors are new investment management firms representing the high-frequency and low-frequency sides, respectively. I sat down to talk with their founders about investing, data, and the challenges of starting up. Below is my talk with Guillaume Vidal, co-founder and CEO of Walnut Algorithms. Stop by next week for Part 2, my interview with Dr. Bernd Hanke, co-founder and co-Chief Investment Officer of Global Systematic Investors.

Why did you call the company “Walnut Algorithms”?

Guillaume Vidal: Because walnuts look like small brains, and from a startup perspective it was fun, like Apple and Blackberry. It also evokes a walnut tree grown out of intelligent algorithms, and we felt it was important to put "algorithms" at the end of the name. So we thought mixing "walnut" and "algorithms" was fun and made a good image.

And how would you summarize what you do?

GV: We apply the latest advances in artificial intelligence to systematic investment strategies.

Was that the idea from the beginning, or did you pivot at some point?

GV: I think it came quite naturally. The six co-founders' backgrounds in artificial intelligence, investment management and finance made us think there's definitely something to do there. As we discovered when looking at a lot of AI startups, many of them wonder what they should be doing with AI. One of the best AI startups in France is called Snips, and even they had a hard time coming up with a product. We focused from the outset on financial services and investment management, which for us was very amenable to AI. We did take a bit of time to find the right business model, which for us now is actually managing capital and advising capital, depending on the regulatory definition. But in the beginning it was really a bit naive to say, "okay, we want to apply AI". We wanted to do what DeepMind did with reinforcement learning or AlphaGo. These are incredibly powerful algorithms, four or five years old at most, and we wanted to apply them to investment management. They are made possible by improvements not only in the AI itself but also in access to data, in programming languages with the right libraries, and in computing power via the Google or Amazon clouds. It's a mix of things that allows this. I would say the most difficult part, maybe the luckiest for us, was to assemble that combination of skills. I think the biggest barrier to entry is actually that combination of AI, computer science, quantitative finance and business skills.

What financial instruments do you focus on?

GV: We focus on liquid equity index futures in the US and Europe, because we need both liquidity and low trading costs; with futures you can go long or short without extra cost.

How did you look for your business model?

GV: We started by saying that AI for finance works. It will work; there is no doubt about that. The question is, who is going to make it work? And it will be very, very valuable. Is it just research? Is it one big product sale, or research fees from selling into hedge funds? Do you partner with a hedge fund, get absorbed by one, provide signals, do consulting work, or create your own hedge fund? All of those were potential business models we looked at. When we applied to Startup Bootcamp and went through the selection stages, we actually told them we didn't really have a business model, and they were fine with it. Now we are starting with separately managed accounts. This is quite standard in finance; a lot of CTAs do that. A fund structure might come at a later stage, as it involves heavier compliance and regulatory issues and is costlier and more time-consuming.

So what is innovative about generating trading signals with machine learning?

GV: Traditional systematic strategies are rule-based. In a systematic strategy that you would code on Quantopian, for example, you would say, "if these three moving averages cross and my yearly pivot is at this level, or if my relative strength index is above a particular threshold, then I buy or sell". These are fixed rules. What we're building is a machine that does not have fixed rules; it is more flexible. A machine learning algo can continuously evolve and actually look at market configurations, classifying buy or sell signals with confidence levels. It's a bit like a trading floor where you have a number of traders. In our case it's a number of robo-traders, individual AI algorithms, plus a portfolio manager, the cash allocator, which takes the underlying signals provided by the different AI algos and optimizes the capital allocated to those signals based on the risk constraints and the exposure constraints: long and short, per instrument, per geography, et cetera.
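As an illustration of the distinction, here is a minimal Python sketch contrasting a fixed-rule signal with a learned classifier that attaches a confidence to each signal. The features, labels and model choice are placeholders, not Walnut's actual system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Illustrative only: each row is a market snapshot described by a few
# engineered features (e.g. moving-average spread, RSI, realized volatility).
# Features and labels here are synthetic, not Walnut's actual inputs.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Fixed-rule strategy: a hard threshold on one feature, no confidence level.
def rule_signal(row):
    return "buy" if row[0] > 0 else "sell"

print("rule says:", rule_signal(X[800]))

# Learned strategy: a classifier that emits buy/sell *with* a confidence,
# which a downstream allocator can weigh against other robo-traders' signals.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:800], y[:800])
for p in clf.predict_proba(X[800:803])[:, 1]:  # P(buy) on held-out snapshots
    side = "buy" if p > 0.5 else "sell"
    print(f"model says: {side} (confidence {max(p, 1 - p):.0%})")
```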

It seems that your clients have to be sophisticated enough to appreciate what you do but not so sophisticated that they can do it themselves.

GV: There’s more than 80,000 funds worldwide. Of course a portion of that is interested in it and the people we talk to are even hedge funds themselves. But sometimes they just have a global macro strategy or a credit strategy or some other form of non-systematic strategies. I would say that internal quant teams sometimes are not necessarily staffed enough to do what we’ve been doing.

We coded everything from A to Z with 12 people, soon reaching 15, all scientists; we had to code the entire infrastructure and do the research, all of it. A number of the more traditional funds will sometimes hire one PhD and say, "let's let him work on one problem and try to enhance one of our systems with machine learning". That doesn't necessarily work, because you may need the collaborative and creative culture often found in startups, rather than one PhD doing some data science on the side in collaboration with one of their quants. We really work as a tight group, brainstorming all the time, bringing computer scientists, mathematicians and AI scientists together to think about what actually works, what should work, how we should code this, how we should design this. It requires an innovation mindset.

Established hedge funds have been running their own systems for maybe 20 years and they have their strategies, long-term systematic or long-term trend-following or whatever; coming up with something completely new, hiring new people, bringing research in-house is difficult. Some try it. I would say the most sophisticated succeed, and these are hedge funds like Renaissance Technologies, Two Sigma, Winton. It's very opaque; we don't necessarily know exactly who's doing what, but they probably have some of that.

And these hedge funds’ algorithms will interact with yours in the markets. Do you have a line of defense against that?

GV: I think there are two main things. One is that for now we're a lot smaller, and we don't necessarily focus on the same asset classes. The larger funds have to be in very deep, very liquid markets, and they quite probably run very different investment strategies on multiple timeframes, from high frequency to yearly trend following. When you have 60 billion under management, you have no choice but to scale to every asset. As we have very minimal assets under management to begin with, we can create intraday strategies on specific assets.

The second part is this. The systematic trend-following CTAs typically have an 80 to 90 percent correlation with one another, because they follow the same trends on the same weekly and monthly basis. When you start using more complex machine learning strategies, there are many, many ways to actually do machine learning. And we think in modules: data gathering, data cleaning, feature engineering, entry points, exit policies, allocation, and market impact. All of these are, for us, machine learning enhanceable and machine learning automatable, and there are so many ways to do it that you end up with a very different system than they have. We come up with some new ways of investing; some signals we come up with are not the signals that everyone has. It's not a golden goose. It's not like you created a machine that just makes money. It has a risk-adjusted return, it has drawdowns, it has inherent risk, but as a portfolio management strategy it does outperform some of the other absolute return strategies, and it is uncorrelated with them. That's the part people are interested in.

Do you worry about overfitting your models, so they work for the time periods you used for model development, but not afterward?

GV: Trying to minimize overfitting as much as possible is really at the core of what we do. There are many ways. First of all, there is data dimensionality: this is why we trade intraday and try to have as many data points as possible. When we do our classification, we try to keep the feature vectors small, reducing the input dimension, and using human expert knowledge is important in that regard. We also do a lot of robustness tests; we designed robustness modules. And we paper trade as well, before anything goes live. But there's always overfitting in a sense. Because you fit your models on historical data, overfitting is there. Some of it is useful, as you have to make sure the algos are fitted to the current market regime, but they have to generalise.
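One common guard against fitting to a single historical period is walk-forward validation. This minimal sketch on synthetic data illustrates the idea; it is a generic technique, not a description of Walnut's robustness modules:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))  # synthetic history of intraday feature vectors
y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)  # synthetic up/down label

# Walk-forward splits: always train on the past and test on the future, so
# the evaluation never peeks ahead; one standard guard against overfitting.
scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))

# Stable out-of-sample accuracy across folds is a (weak) robustness check;
# a large train/test gap would suggest the model memorized one market regime.
print(["%.2f" % s for s in scores])
```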

Do your algorithms recognize when the regime has changed or do you need humans for that?

GV: Yes, we automate that. We try very hard to automate it at multiple levels of the decision making: in the allocation portion and in the entry signal portion. So maybe the underlying algo itself understands that the market has changed and gives us higher or lower confidence in its signal. But at the allocation level too, we might say, "that particular algo sent me that particular signal, but I'm going to discard it because we are not in the right regime". So we can take regime changes into account at multiple levels. There is no human intervention unless something very critical happens, a big financial crisis or a big flash-crash, where we might decide the algos probably won't work right now and we should shut everything down.

Do you see investment management becoming dominated by AI in the future?

GV:  It’s difficult to see the future, but portfolio managers or heads of hedge funds will probably switch from being traders, economists, business guys into data scientists, mathematicians, into people who are capable of using data, understanding data and managing teams of scientists and teams of engineers. Since AI is becoming more accessible, data is becoming more accessible, computing power is becoming more accessible, you’re probably going to have firms like us coming and disrupting the larger hedge funds out there and they will have to, in a sense, defend their position against those players, or buy them out, or find a way to innovate themselves, because currently they are not really doing it.

Do you think that somewhere down the road, AI for investments could become commoditized?

GV: AI is not automatic, and AI is not a monolith; it's not one big "I do AI". I don't see it becoming completely commoditized. It's not like "I have an AI algo, I'll plug it into data and then it works." It's a lot more complicated than that. You have to do a lot of feature engineering, you have to have trading experience and market experience; there are many different parameters and many different ways of doing it. Maybe you'll get some form of commoditization: Quantopian, for example, managed to commoditize, in a sense, the writing of a systematic algorithm on a platform, and it has attracted a lot of people. But maybe someone who uses a different platform will have an edge over people who are all using the same platform with access to the same features and the same data.

This brings us back to the ideal team composition for AI trading.  

GV: You need people with trading experience, data scientists, computer scientists. For the infrastructure, code optimization and execution you need strong IT people. Data science and AI are more or less the same for us, but there is a difference between an AI practitioner and an AI researcher. A data scientist knows how to code and how to use machine learning libraries, but a researcher understands the real underlying principles of a neural network and might work on getting a better cost function, the kinds of things that are not a data scientist's job.

And what happens when these people with their different backgrounds disagree on how to move forward?

GV:  That’s huge. I think that’s what makes us what we are, having a team of people who are open minded and capable of just debating all day long, and the best idea wins. It’s creativity management. It’s trying to get all these people to disagree in the beginning and agree in the end. And also to agree on what to prioritize, because we always have a pipeline of ideas that could take a thousand people a hundred years to implement, but we have to decide, what’s the low-hanging fruit? What can we do right now to improve the results as much as possible?  And then you also have the more technical guys who say, “okay, I can code this”, or “it’s too long to code this”, or “how should we code this?”

How do you feel about non-traditional data sources, big data?

GV: We make a distinction between AI and big data, and people tend not to. AI is a way to make sense of big data, but we focus on the improvements in AI itself. When DeepMind came up with AlphaGo or the Atari games, for example, those were really algorithmic improvements: small or fairly limited data sets, with the improvement in the AI itself. We focus on strong AI rather than on alternative data sources. One of the reasons is the data dimensionality issue I mentioned; we're looking for statistically robust strategies.

There is a lot of demand for data scientists in other industries. How do you attract data scientists to work in finance?

GV: First of all, we market ourselves as a technology company, and all the successful firms and funds in this space do the same. If you look at the marketing of Two Sigma, Winton, or Renaissance Technologies, they say "we are a technology company, a research company, that happens to be trading", and this is very important for attracting the right people. If you are just another hedge fund, then largely because of the crisis and the reputation of the hedge fund industry, people don't really want to work there. But the work in-house is actually quite interesting. You're working on very complex datasets, you're researching, and there's a very straightforward application; the results are right there, black and white. When you optimize code or do data science on new data sets, new strategies, new markets, new instruments day to day, it's actually quite interesting, maybe even more interesting than doing it at a media company. On the long-term perspective, say a five-to-ten-year vision, we would like to expand into other areas. Renaissance Technologies, the New York-based hedge fund, is considered one of the best theoretical physics labs in the world, and similarly we would like Walnut to be one of the best AI labs in the world.

 


Image: fhir.phohtograph, CC BY 2.0

What are banks telling their investors about FinTech?

Interest in fintech disruption is at an all-time high, but who will be the winners and who the losers is far from clear. Banks themselves have been sending mixed messages. As a particularly high-profile example, JP Morgan Chase CEO Jamie Dimon famously raised the alarm when he said in 2014 that fintech challengers "all want to eat our lunch. Every single one of them is going to try". This year, however, he sounded more sanguine: "It will be a challenge for anyone to be better, faster, cheaper than us." In the meantime, PwC reported that 95 percent of the bankers it surveyed "believe that part of their business is at risk of being lost to standalone FinTech companies", although we do not know which banks are concerned, how exactly they think their business will be affected, and what they plan to do about it.

Individual bankers' opinions are one thing, but if a publicly held bank is sufficiently concerned about the fintech threat, it arguably has a responsibility to inform its shareholders. In particular, U.S. listed corporations discuss competition in their electronic annual reports (also known as Form 10-K), which are mandated by the Securities and Exchange Commission. So what do U.S. bank holding companies, of which there are over four hundred, actually say in their 10-Ks about competition from fintech? This is the question my co-authors, Sinziana Bunea at the University of Pennsylvania and Benjamin Kogan at FinTxt, and I set out to answer. The results surprised us.

First, given that fintech has been increasingly in the news since the financial crisis, we were surprised that the earliest mention of it was only this year – specifically, on February 17th, early on in the filing season. The identity of the first fintech-mentioning bank was also unexpected: Huntington Bank, an important regional bank headquartered in Ohio, but hardly a household name. The subsequent dozen or so filers kept mum about fintech. But on February 23rd, JP Morgan itself acknowledged competition from fintech, and this appears to have opened the floodgates. Over the following week a full ten banks mentioned fintech in their filings for the first time. By the time the dust settled, a group of 14 U.S. banks had explicitly informed their investors that they regarded fintech as a potential threat.

The composition of this 14-bank group is also puzzling. It includes three of the nation's top-ten banks (JP Morgan, BNY Mellon, and PNC) together with nine regional players with assets in the billions (Beneficial, First Interstate, Horizon, Huntington, Iberiabank, SVB, UMB, Umpqua and Zions) and even two minnows with under a billion dollars in assets (Hamilton and CSB). What do these banks have in common, other than being the first to officially register their concern over fintech? On the surface of it, not much. To investigate further, we looked at what these banks actually say about fintech in their filings.

In fact, seven of the banks merely mention fintech in a list of different competitor types including other banks, brokerages, insurers, credit card companies, and so on. The list includes five types of competitors for CSB and Umpqua, six for Beneficial, seven for BNY Mellon and Zions, twelve for First Interstate, and an impressive eighteen for JP Morgan (the banks' wide disagreement about the number of distinct competitive threats they face is interesting in itself).

Three of the banks go beyond a simple mention of fintech, although what they say about it is not particularly insightful. Thus, PNC, SVB and UMB go on to note that fintech competitors offer services such as payments and lending.

More intriguingly, two banks evoke less obvious aspects of a possible fintech threat. Horizon appears concerned about “the migration of bank personnel” away from traditional banks and toward their fintech competitors, while Iberiabank warns that competing with fintech on technology “would result in significant costs and increased risks of cyber security attacks”.

Of the remaining two banks, Hamilton Bancorp, by far the smallest and the most recent filer, is almost gushing in its praise of fintech: “They offer user friendly front-end, quick turnaround times for loans and other benefits. While Hamilton is evaluating FinTech companies with the possibility of developing relationships for efficiency in processing and/or as a source of loans and other business, we cannot limit the possibility that our customers or future prospects will work directly with a FinTech company instead.” We may never know what prompted such an outspoken assessment, but the frankness is certainly refreshing.

The prize for depth of disclosure would have to go to the pioneer. Huntington Bancorp, the first U.S. depository institution ever to mention fintech in its annual report, also goes the furthest in discussing its competitive strategy: "Financial Technology, or FinTech, startups are emerging in key areas of banking. In response, we are monitoring activity in marketplace lending along with businesses engaged in money transfer, investment advice, and money management tools. Our strategy involves assessing the marketplace, determining our near term plan, while developing a longer term approach to effectively service our existing customers and attract new customers. This includes evaluating which products we develop in-house, as well as evaluating partnership options where applicable." It will be interesting to see what fruit this strategy bears, and whether other banks become as open about their fintech strategies as Huntington.

So, are these fourteen banks indeed particularly vulnerable to fintech competition, as taking the disclosures at face value would suggest, or are they simply more familiar with fintech than their non-fintech-mentioning peers? Looking at the banks’ actions, we find that at least five fall squarely into the latter category, led by the three giants. For example, JP Morgan has launched a residency program for fintech firms, invested in fintech firms such as Motif, and formed a partnership with OnDeck; BNY Mellon has set up fintech innovation centers; and PNC has invested in Digital Asset Holdings, a blockchain technology company. Of the smaller banks, SVB (which stands for “Silicon Valley Bank”) has made equity investments in Lending Club and Nvoicepay and hosts a fintech conference, while Umpqua is establishing a fintech subsidiary in Silicon Valley.

What about the other nine? It’s harder to tell. Which brings us to the question, why is such a disparate group of banks suddenly talking about fintech in their official filings? One possible answer is, in the words of 2016 Nobel Laureate Bob Dylan, “Because something is happening here, but you don’t know what it is”. It is plausible that, uncertain about what is happening and what to do, banks were taking cues from one another, a phenomenon economists colorfully refer to as “herding”. Under this interpretation, once Huntington went first (perhaps prompted by its acquisition of FirstMerit, a 171-year-old rival and neighbor, certainly a thought-provoking event), JP Morgan may not have wanted to be left far behind. Others would then have followed in reaction to JP Morgan’s filing, given that bank’s stature in the industry. However, once it became clear that only a few of the largest banks chose to mention fintech, the others’ proclivity to do so would have been greatly diminished. I stress, though, that the above is only a possible interpretation of what happened.

So what will happen in the next filing season? Will the number of fintech mentioning banks stay the same? Will it double or quadruple? Will the disclosures become more informative? Will some banks copy their wording from others? How will banks’ words correlate with their actions? And, most importantly, will banks’ fintech-related disclosures become a leading indicator for the bank-fintech dynamic?

Watch this space.

 

The study, titled "Banks vs. fintech: At last, it's official", was published in volume 44 of the Journal of Financial Transformation and can be downloaded here.

 


Image: Chris Brown, CC By 2.0
