Monday, January 27, 2020

Data Update 2 for 2020: Retrospective on a Disruptive Decade

My data updates usually look at the data for the most recent year and what I learn from it, but 2020 also marks the end of a decade. In this post, I look back at markets over that period, a testing one for many active investors, and particularly so for value investors, who found that even as financial assets posted solid returns, what they thought were tried and true approaches to "beating the market" seemed to lose their power. In addition, trust in mean reversion, i.e., the belief that things would go back to historic norms, was shaken as interest rates remained low for much of the period and PE ratios rose above historical averages and kept rising, rather than falling back.

1. It was a great year, and a very good decade, for equities, and a very good year for bonds!
While investing should always be forward-looking, there is a benefit to pausing and looking backwards. If you had US stocks in your portfolio, 2019 was a very good year. The S&P 500 started the year at 2506.85 and ended the year at 3230.78, an increase of 28.88%, and with dividends added, the return for the year was 31.22%. To get a sense of how this year measures up against other good years, I compared it to the annual returns from 1927 to 2019 in this graph:
Download spreadsheet with annual market data
Over the 92 years in this historical assessment, 2019 ranked as the sixteenth best year overall and second only to 2013 (annual return of 32.15%) in this century. While stocks have garnered the bulk of the attention for having a good year, bonds were not slackers in the returns game: in 2019, the ten-year US treasury bond returned 9.64% and ten-year Baa corporate bonds weighed in with a 15.33% return. That may surprise some, given how low interest rates have been, but the bulk of these returns came from price appreciation, as the US treasury bond rate declined from 2.69% to 1.92%, and the corporate bonds also benefited from a decline in default spreads (the price of risk in the bond market) during the year. The year also capped off a decade of gains for stocks, with the S&P almost tripling from 1115.10 on January 1, 2010 to 3230.78 on January 1, 2020; with dividends included and reinvested, the cumulated return for the decade was 252.96%. To put these returns in perspective, I have compared this cumulated return to the eight prior full decades that I have data for in the table below, alongside the cumulated returns for treasury and corporate bonds over each decade:
Download spreadsheet with annual market data
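The "cumulated" returns in the table are simply annual returns compounded. A minimal sketch in Python, using the index levels quoted above; the 13.4% annual return at the end is an illustrative round number, not the actual return series:

```python
start_level = 1115.10   # S&P 500 on January 1, 2010
end_level = 3230.78     # S&P 500 on January 1, 2020

price_return = end_level / start_level - 1
print(f"Price-only decade return: {price_return:.2%}")   # ~189.7%, before dividends

def cumulated_return(annual_returns):
    """Compound a series of annual total returns into one cumulated return."""
    total = 1.0
    for r in annual_returns:
        total *= 1 + r
    return total - 1

# An illustrative decade averaging ~13.4%/year compounds to roughly 252%,
# which is how reinvested dividends lift the decade figure well above the price change.
print(f"With dividends (illustrative): {cumulated_return([0.134] * 10):.2%}")
```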
While 2010-19 represented a bounce back for stocks from a dismal 2000-09, when the 2008 crisis ravaged returns, it falls behind three other decades of even higher returns (1950-59, 1980-89 and 1990-99). It was a middling decade for both treasury and corporate bonds, with cumulated returns running ahead of the three decades spanning 1940 to 1969 but behind the other decades. Treasury bills delivered their worst decade of returns since the 1940s, with a cumulated return of just 5.25%. I don’t want to overanalyze historical data, but there are interesting nuggets of information in it:
a. Historical Risk Premium: The US historical data has been used by many analysts in corporate finance and valuation as the basis for computing historical risk premiums and in the table below, I compute the risk premiums that investors would have earned in this market, investing in stocks as opposed to treasury bills and bonds, over different time periods, and with different averaging approaches:
Download spreadsheet with annual market data
If you go with the geometric average premium from 1927-2019 as your predictor for the equity risk premium in 2020, US stocks should earn about 4.83% more than US treasury bonds for the year:
Expected return on stocks in 2020 = T.Bond Rate + Historical ERP 
= 1.92% + 4.83% = 6.75%
Since a portion of this return will come from dividends, the expected price appreciation in stocks is the difference:
Expected price appreciation on stocks = Expected Return - Dividend yield 
= 6.75% - 1.82% = 4.93%
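The arithmetic above can be sketched in a few lines of Python, using the post's inputs:

```python
t_bond_rate = 0.0192       # 10-year US T.Bond rate at the start of 2020
historical_erp = 0.0483    # geometric average premium, stocks over T.Bonds, 1927-2019
dividend_yield = 0.0182    # S&P 500 dividend yield

expected_return = t_bond_rate + historical_erp
expected_price_appreciation = expected_return - dividend_yield

print(f"Expected return on stocks:   {expected_return:.2%}")              # 6.75%
print(f"Expected price appreciation: {expected_price_appreciation:.2%}")  # 4.93%
```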
I am not a fan of historical premiums, not only because they represent an almost slavish faith in mean reversion but also because they are noisy; the standard errors in the historical premiums are highlighted in red, and you can see that even with 92 years of data, the standard error in the risk premium is 2.20%, and that with 10 or 20 years of data, the risk premium estimate is drowned out by estimation error.
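To make the noise point concrete, the standard error of a historical average shrinks only with the square root of the sample size. A rough sketch, assuming an annual standard deviation of about 21% for stock returns (close to the long-run US figure, but an assumption here):

```python
import math

# The premium being estimated is only ~4-5%, so this noise matters.
annual_std = 0.21  # assumed annual standard deviation of stock returns

for years in (92, 50, 20, 10):
    std_error = annual_std / math.sqrt(years)
    print(f"{years:3d} years of data -> standard error of the premium ~ {std_error:.2%}")
```

With 92 years, the standard error works out to about 2.2%, matching the number quoted above; with 10 years, it is three times that, swamping the premium itself.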
b. Asset Allocation: The fact that stocks have beaten treasury and corporate bonds by wide margins over the entire history is often the sales pitch used to push investors to allocate more of their savings to stocks, with the argument being that stocks always win in the long term. The data should yield cautionary notes:
  • First, in three decades out of the nine in the table, stocks under-performed treasury bonds and treasury bills, and if your response is that ten years is not a long enough time period, you may want to check the actuarial tables. 
  • Second, there is a selection bias in our use of the US markets for computing the historical premium. Looking across the globe, the US was one of the most successful equity markets of the last century and using it may be skewing our results upwards. Put bluntly, if you had invested in the Nikkei at the height of its climb in the 1980s, you would still be struggling to get back the money you lost, when the Japanese markets collapsed.
c. Market Timing: It is human nature to try to time markets, and some investors make it the central focus of their investment philosophies. I will not try to litigate the good sense of doing so in this post, but the historical return data gives us a sense of both the upside and the downside of doing so. In terms of pluses, an investor who was able to avoid the doomed decades (when stocks earned less than T.Bills and T.Bonds) would be comfortably ahead of an investor who did not, if he or she stayed fully invested in the remaining decades. In terms of minuses, if the market timing investor failed to stay invested in stocks in the good decades, the opportunity costs would quickly overwhelm the benefits. Between 2010 and 2019, there were many investors who believed that a correction was around the corner, driven by their perception that interest rates were being kept artificially low by central banks and that they would revert to historic norms quickly. When that reversion did not occur, these investors paid a hefty price in returns foregone. All of the historical returns that I have reported in this section are nominal, and to the extent that you are interested in real returns, you may want to download the historical data from my website and check out the results. (Hint: Not much changes)
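The asymmetry between dodging a bad decade and missing a good one can be illustrated with round numbers (these are made-up decade returns, not the table's actual figures):

```python
# Made-up cumulated decade returns: +250% for a good decade, -10% for a bad one.
def grow(decade_returns):
    """Compound a sequence of cumulated decade returns into a wealth multiple."""
    wealth = 1.0
    for r in decade_returns:
        wealth *= 1 + r
    return wealth

good, bad = 2.5, -0.1
buy_and_hold = grow([good, bad, good])   # stays invested through all three decades
dodged_bad = grow([good, 0.0, good])     # sat out the bad decade (0% in cash, for simplicity)
missed_good = grow([good, bad, 0.0])     # mistimed and sat out a good decade instead

print(f"Buy and hold:       {buy_and_hold:.2f}x")
print(f"Dodged bad decade:  {dodged_bad:.2f}x")
print(f"Missed good decade: {missed_good:.2f}x")
```

Dodging the bad decade helps a little; missing a good one costs roughly two-thirds of ending wealth.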

2. A Low Interest Rate Decade
If there was a defining characteristic for the decade, it was that interest rates, both in the US and globally, dropped to levels not seen in decades. You can see this in the path of the US 10-year treasury bond rate in the graph below:
Download historical treasury rates, by year
Since the drop in rates occurred after the 2008 crisis, and in the aftermath of concerted actions by central banks to bolster weak economies, it has become conventional wisdom that it is central banks that have kept rates artificially low, and that the ending of quantitative easing would cause rates to revert back to historical averages. As many of you who have been reading my posts know, I don't believe that central banks have the power to keep long term market-set rates low, if the fundamentals don't support low rates. In fact, one of my favorite graphs is one where I compare the 10-year treasury bond rate each year to the sum of the inflation rate and real GDP growth rate that year (intrinsic riskfree rate):
Download historical treasury rates, by year
As you can see, the main reason why rates have dropped in the US and Europe has been fundamental. As inflation has declined (and become deflation in some parts of the world) and real GDP growth has been anemic, intrinsic and actual risk free rates have dropped. To the extent that the difference between the two is a measure of central banking actions, it is true that the Fed’s actions kept actual rates lower than intrinsic rates more in the last decade than in prior years, but it is also true that even in the absence of central banking intervention, rates would not have reverted back to historical norms. 
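For the mechanics, the intrinsic riskfree rate in the graph is just the sum of the two fundamentals. A sketch with illustrative inputs (not the actual inflation or GDP series):

```python
def intrinsic_riskfree(inflation, real_gdp_growth):
    """Intrinsic riskfree rate = expected inflation + real GDP growth."""
    return inflation + real_gdp_growth

inflation, real_growth = 0.018, 0.023   # a low-inflation, slow-growth year (illustrative)
intrinsic = intrinsic_riskfree(inflation, real_growth)
actual_t_bond = 0.0192                  # 10-year T.Bond rate at the end of 2019

print(f"Intrinsic riskfree rate: {intrinsic:.2%}")                      # 4.10%
print(f"Actual minus intrinsic:  {actual_t_bond - intrinsic:.2%}")      # negative gap
```

On these inputs, low rates are mostly fundamental; the gap between actual and intrinsic is the part you can plausibly attribute to central banks.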

3. It was a tech decade, and FAANG stocks stole the show!
While it was a good decade for stocks, the gains varied across sectors. Using the S&P 500 again as the indicator, you can see the shift in value by looking at how the different sectors evolved over the decade, as a percent of the index:
The most striking shift is in the energy sector, which dropped from 11.51% of the index to 4.60%, in market capitalization terms. Some of this drop is clearly due to the decline in oil prices during the decade, but some of it can be attributed to a general loss of faith in the future of fossil fuel and conventional energy companies. The biggest sector through the entire decade was technology, but its increase in percentage terms seems modest at first sight, rising from 19.76% in 2009 to 21.97% in 2019; that is because two of the biggest names in the sector, Alphabet (Google) and Facebook, were moved to the communication services sector, and if they had been left in technology, its share of the index would have risen to more than 30%. In fact, the five FAANG stocks (Facebook, Amazon, Apple, Netflix and Google) had a very good decade, with their collective market capitalization increasing by $3.4 trillion over the ten years:

Put in perspective, the FAANG stocks accounted for 22% of the increase in market capitalization of the S&P 500, and any portfolio that did not include any of these stocks for the entire decade would have had a tough time keeping up with the market, let alone beating it. (This is an approximation, since not all five FAANG stocks were part of the S&P 500 for the entire decade, with Facebook entering after its IPO in 2012 and Netflix being added to the index in 2014).

4. Mean Reversion or Structural Shift
One of the perils of being in a market like the US, where rich historical data is available and easily accessible is that analysts and academics have pored over the data and not surprisingly found patterns that have very quickly become part of investment lore. Thus, we have been told that value beats growth, at least over long periods, and that small cap stocks earn a premium, and have converted these findings into investing strategies and valuation practices. While it is dangerous to use a decade’s results to abandon a long history, the last decade offered sobering counters to old investing nostrums.

a. Value versus Growth
The basis for the belief that value beats growth is both intuitive and empirical. The intuitive argument is that value stocks are priced cheaper and hence need to do less to beat expectations; the empirical argument is that stocks classified as value stocks, defined as those trading at low price to book and low price to earnings ratios, have historically done better than growth stocks, defined as those trading at high price to book and high price to earnings ratios. Consider the annual returns on the lowest and highest PBV stocks in the United States, going back to 1927:
Raw Data from Ken French
The lowest price to book stocks have historically earned 5.22% more than the highest price to book stocks, if you look at 1927-2019. Broken down by decades, though, you can see that the assumption that value beats growth is not as easily justified:
Raw Data from Ken French
While there are some, especially in the old-time value crowd, who view the last decade as an aberration, the slide in the value premium has been occurring over a much longer period, suggesting that there are fundamental factors at play eating away at the premium. If you are a believer in value, as I am, there is a consolation prize here. Assuming that low PE stocks and low PBV stocks are good value is the laziest form of value investing, and it is perhaps not surprising that in a world where ETFs and index funds can be created to take advantage of these screens, there is no payoff to lazy value investing. I believe that good value investing requires creativity and out-of-the-box thinking, as well as a willingness to live with uncertainty, and even then, the payoff is not guaranteed.
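For the mechanics, the value premium in these charts is just the return spread between the cheapest and most expensive price-to-book groupings, averaged over time. A sketch with a hypothetical decade of returns (illustrative numbers, not the Ken French data itself):

```python
def average_premium(low_pbv_returns, high_pbv_returns):
    """Average annual return spread of low PBV (value) over high PBV (growth)."""
    spreads = [lo - hi for lo, hi in zip(low_pbv_returns, high_pbv_returns)]
    return sum(spreads) / len(spreads)

# Hypothetical annual returns for the two groups over one decade:
low_pbv  = [0.12, 0.08, -0.02, 0.15, 0.10, 0.06, 0.11, 0.04, 0.09, 0.07]
high_pbv = [0.14, 0.12,  0.01, 0.18, 0.13, 0.09, 0.15, 0.08, 0.12, 0.11]

# A negative number means growth beat value over this hypothetical decade.
print(f"Average value premium: {average_premium(low_pbv, high_pbv):.2%}")
```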

b. The Elusive Small Cap Premium
Another accepted part of empirical wisdom about stocks not only in the US, but also globally, is that small cap stocks deliver higher returns, after adjusting for risk using conventional risk and return models, than large cap stocks. 
Raw Data from Ken French
Looking at the data from 1927 to 2019, it looks conclusively like small market cap stocks have earned substantially higher returns than larger cap stocks; relative to the overall market, small cap stocks have delivered about 4-4.5% higher returns, and conventional adjustments for risk don't dent this number significantly. Not only has this led some to put their faith in small cap investing but it has also led analysts to add a small cap premium to costs of equity, when valuing small companies. I have not only never used a small cap premium, when valuing companies, but I am skeptical about its existence, and wrote a post on why a few years ago. Again, updating the data by decades, here is what I see:
Raw Data from Ken French
As with the value premium, the size premium had a rough decade between 2010 and 2019, dropping close to zero on a value weighted basis and turning significantly negative when returns are computed on an equally weighted basis. Again, the trend is longer term, as there has been little or no evidence of a small cap premium since 1980, in contrast to the dramatic premiums in prior decades. If you are investing in small cap stocks expecting a premium, you will be disappointed, and if you are still adding small cap premiums to your discount rates when valuing companies, you are about four decades behind the times.

5. New buzzwords were born
Every decade has its buzzwords, words that not only become the focus for companies but are also money makers for consultants, and the last decade was no exception. At the risk of being accused of missing a few, there were two that stood out to me. The first was big data, driven partly by more extensive collection of information, especially online, and partly by tools that allowed this data to be accessed and analyzed. The other was crowd wisdom, where expert opinions were replaced by crowd judgments on a wide range of applications, from restaurant reviews to new (crypto) currencies.

a. Big Data
Earlier in this post, I looked at the surge in value of the FAANG stocks, and how they contributed to shaping the market over the last decade. One common element that all five companies shared was that they were not only reaching tens of millions of users, but were also collecting information on those users, and then using that information to improve existing products/services and add new ones. Other companies, seeking to emulate their success, tried their hand at “big data”, and it became a calling card for start-ups and young firms during the decade. While I agree that Netflix and Amazon, in particular, have turned big data into a weapon against competition, and that Facebook’s entire advertising business is built on using personal data to focus advertising, I personally believe that, like all buzzwords, big data has been oversold. In particular, I noted in a post from 2018 that for big data to create value:
  1. The data has to be exclusive: For data to be valuable, there has to be some exclusivity. Put simply, if everyone has it, no one has an advantage. Thus, the fact that you, as a business, can trace my location has little value when two dozen other applications and services on my iPhone are doing exactly the same thing. 
  2. The data has to be actionable: For value conversion to occur, the data that has been collected has to be usable in modifying and adapting the products and services you offer as a business. 
Using this two-part test, you can see why Amazon and Netflix are standouts when it comes to big data, since the data they collect is exclusive (Netflix on your viewing habits/tastes and Amazon on your retail behavior) and is then used to tailor their offerings (Netflix with its original content investments and offerings and Amazon with its product nudging). Using the same two-part test, you can also see why the claims of big data payoffs at MoviePass and scooter makers like Bird never made sense.

b. Crowd Wisdom
One consequence of the 2008 crisis was a loss of faith not just in institutional authorities (central banks, governments, regulators) but also in experts, most of whom had been hopelessly wrong in the lead up to the crisis. It is therefore not surprising that you saw a move towards trusting crowds for answers to big questions right after the crisis. It is no coincidence that Satoshi Nakamoto (whoever he might be) posted the paper laying out the architecture of Bitcoin in November 2008, a proposal for a digital currency without a central bank or regulatory overlay, where transactions would be crowd-checked (by miners). While Bitcoin has been more successful as a speculative game than as a currency during the last decade, the blockchain technology that it introduced has now found its way into a much wider range of businesses, threatening to replace institutional oversight (from banks, stock exchanges and other established entities) with cheaper alternatives. The crowd concept has expanded into almost every aspect of our lives, with Yelp ratings replacing restaurant reviewers in our choices of where to eat, Rotten Tomatoes supplanting movie critics in deciding what to watch, and betting markets replacing polls in predicting election outcomes. I share the distrust of experts that many others have, but I am also wary of crowd wisdom. After all, financial markets have been laboratories for observing how crowds behave for centuries, and we have found that while crowds are often much better at gauging the right answers than market gurus and experts, they are also prone to herding and collective bad choices. For those who have become too trusting of crowds, my recommendation is that they read “The Madness of Crowds”, an old manuscript that is still timely.

The decade to come
It has been said that those who forget the past are destined to relive it, and that is one reason why we pore over historical track records, hoping to get insight into the future. But it has also been said that army generals who prepare too intensely to fight the last war will lose the next one, suggesting that reading too much into history can be dangerous. To me, the biggest lesson of the last decade is to keep an open mind and to not take conventional wisdom as a given. I don’t know what the next decade will bring us, but I can guarantee you that it will not look like the last one or any of the prior ones. So, strap on your seat belts and get ready! It’s going to be a wild ride!



Data Links

  1. Stocks, Bonds and Bills: 1928-2019
  2. Intrinsic and Actual Risk free Rates: 1954-2019
  3. Ken French Data on Value and Size Effects

Monday, January 13, 2020

Data Update 1 for 2020: Setting the table

Starting in the early 1990s, I have spent the first week or two of every new year playing my version of Moneyball, downloading raw market and accounting data on publicly traded companies and using that data to compute operating, pricing and risk metrics for them. This year, I got a later start than usual on January 6, but as the week draws to a close, the results of my data exploration are posted on my website and will be the basis for a series of posts here over the next six weeks. As you look at the data, you will find that the choices I have made on how to classify companies and compute metrics affect my findings, and I will use this post to cast some light on those choices.

The Data
Raw Data: We live in an age when accessing raw data is easy, albeit not always cheap, and the tools to analyze that data are also widely available. My raw data is drawn from a variety of sources, ranging from S&P Capital IQ to Bloomberg to the Federal Reserve, and there are two rules that I try to follow. The first is to be careful about attributing sources for the raw data, and the second is to not undercut my raw data providers by replicating their data on my site, if they have commercial interests. 
Data Analysis: Broadly speaking, I would categorize my data updates into three groups. The first is macro data, where my ambitions tend to be modest, and the only numbers that I update are those that I need and use in my valuation and corporate financial analysis. The second is business data, where I consolidate the company-level data into industry groupings, and report statistics on how companies invest, finance their operations and return cash (dividends and buybacks). The third is my data archives, where you can look at trend lines in the statistics by accessing my statistics from prior years.

A. Macro Data
I am not a market timer or a macro economist, and my interest in macro data is therefore limited to numbers that I cannot easily look up, or access, on a public database. Thus, there is no point in my reporting exchange rates between major currencies, when you have FRED, the Federal Reserve data site, which I cannot praise highly enough for its reach and accessibility. I do report and update the following:
  • Risk free rates in currencies: The way in which currencies are dealt with in valuation and corporate finance leaves us exposed to multiple problems, and I have written about both why risk free rates vary across currencies and why government bond rates are not always risk free. At the start of every year, I update my currency risk free rates, starting with the government bond rates, and then netting out default spreads and report them here. As risk free rates in developed market currencies hit new lows, and central banks are blamed for the phenomenon, I also update an intrinsic measure of the US dollar risk free rate, obtained by adding the inflation rate to real GDP growth each year, and report the time series in this dataset.
  • Equity Risk Premiums: The equity risk premium is the price of risk in equity markets and plays a key role in both corporate finance and valuation. The conventional approach to estimating this risk premium is to look at history, and to compare the returns that you would have earned investing in stocks, as opposed to investing in risk free investments. I update the historical risk premium for US stocks, by bringing in 2019 returns on stocks, treasury bonds and treasury bills in this dataset; my updated geometric average premium for stocks over US treasuries is 4.83%. I don't like the approach, both because it is backward looking and because the risk premium estimates are noisy, and have argued for a forward looking or implied ERP instead. I estimate the implied ERP to be 5.20% at the start of 2020 and report the year-end estimates of the premium going back to 1960 in this dataset.
  • Corporate Default Spreads: Just as equity risk premiums measure the price of risk in equity markets, default spreads measure the price of risk in the debt markets. I break down bonds into bond rating classes (S&P and Moody's) and report my estimates of default spreads at the start of 2020 in this spreadsheet (and it includes a way of estimating a bond rating for a firm that does not have one).
  • Corporate Tax Rates: Ultimately, companies and investors count on after-tax income, though companies are adept at keeping taxes paid low. While I will report the effective tax rates that companies actually pay in my corporate data, I am grateful to KPMG for going through tax codes in different countries and compiling corporate tax rates, which I reproduce in this dataset.
  • Country Risk Premiums: As companies expand their operations beyond domestic markets, we are faced with the challenge of bringing in the risk of foreign markets into our corporate financial analyses and valuation. I have spent much of the last 25 years trying to come up with better ways of estimating risk premiums for countries, and I describe the process I use in excruciating detail in this paper. At the start of 2020, I use my approach, flaws and all, to estimate equity risk premiums for 170 countries and report them in this dataset.
With macro data, it is generally good practice in both corporate finance and valuation to bring in the numbers as they are today, rather than have a strong directional view. So, uncomfortable though it may make you, you should be using today's risk free rates and risk premiums, rather than normalized values, when valuing companies or making investment assessments.

B. Micro Data
The sample: All data analysis is biased, and the bias starts with the sampling approach used to arrive at the data set. My data sample includes all publicly traded companies, listed anywhere in the world, and the only criterion that I impose is that they have a market capitalization number available as of December 31, 2019. The resulting sample of 44,394 firms includes firms from 150 countries, some of which have very illiquid markets and questionable disclosure practices. Rather than remove these firms from my sample, which creates its own biases, I keep them in and deal with the consequences when I compute my statistics.

While this is a comprehensive sample, it is still biased because it includes just publicly listed companies. There are tens of thousands of private businesses that are part of the competitive landscape that are not included here, and the reason is pragmatic: most of these companies are not required to make public disclosures and there are few reliable databases that include data on these firms. 
The Industry Groupings: While I do have a (very large) spreadsheet with the data at the company level, I am afraid that my raw data providers do not allow me to share that data, even though it is entirely comprised of numbers that I estimate. I consolidate that data into 94 industry groupings, which are loosely based on the industry groupings I created from Value Line in the 1990s, when I first started creating my datasets. To see my industry groupings and the companies that fall into each one, try this dataset. In classifying individual companies, there are two challenges that I face. First, there are companies that operate in many businesses, and I classify these companies into the industry groups from which they derive the most revenues. Second, some companies are shape shifters when it comes to industry grouping, and it is unclear which grouping they belong to; for a few high profile examples, consider Apple and Amazon. There is little that I can do about either problem, but consider yourselves forewarned.
The statistics: My interests lie in corporate finance and valuation and selfishly, I report the statistics that matter to me in that pursuit. Luckily, as I described it in my post a few weeks ago, corporate finance is the ultimate big picture class and the statistics cover the spectrum, and I think the best way to organize them is based upon broad corporate finance principles:
If you are interested, you will find more in-depth descriptions of how I compute the statistics that I report both in the datasets themselves as well as in this glossary.
The timing: I use a mix of market and accounting data, and that creates a timing problem, since the accounting data is updated at the end of each quarter while the market data is updated continuously. Using the logic that I should be accessing the most updated data for every item, my January 1, 2020 update has market data (share prices, interest rates etc.) as of December 31, 2019 and accounting data as of the most recent financial statement (September 30, 2019 for most companies). I don't view this as inconsistent, but as a reflection of the reality that investors face.

C. Archived Data
When I first started compiling my datasets, I did not expect them to be widely used, and certainly did not believe that they would be referenced over time. As I started getting requests for datasets from earlier years, I decided that it would save both me and you a great deal of time to create an archive of past datasets. As you look at these archives, you will notice that not all datasets go back to the 1990s, reflecting first the expansion of my analysis from just US companies to global companies about 15 years ago, and second the addition of variables that I either did not or could not report in earlier years.

The Rationale
If you are wondering why I collect and analyze the data, let me make a confession, at the risk of sounding like a geek. I enjoy working with the data and more importantly, the data analysis is a gift that keeps on giving for the rest of the year, as I value companies and do corporate financial analysis.
  1. It gives me perspective: In a world where we suffer from data overload, the week that I spend looking at the numbers gives me perspective not only on what comprises normal in corporate financial behavior, but also on the differences across sectors and geographies. 
  2. Possible, Plausible and Probable: I have long argued that the valuation of a company always starts with a story but that a critical part of the process of converting narrative to value is checking the story for possibility, plausibility and probability. Having the global data aggregated and analyzed can help significantly in making this assessment, since you can see the cross section of revenues and profit margins of companies in the business and see if your assessments are out of line, and if so, whether you have a justification. 
  3. Rules of thumb: In spite of all of the data that we now have available, investors and companies seem to still rely on rules of thumb devised in a different time and market. Thus, we are told that companies that trade at less than book value, or six times EBITDA, are cheap, and that the target or right debt ratio for a manufacturing company is 40%. Using the global data, we can back up or dispel these rules of thumb and perhaps replace them with more dynamic and meaningful decision rules.
  4. Fact-based opinions: Many market prognosticators and economists seem to have no qualms about making up stuff about investor and corporate behavior and stating it as fact. Thus, it has become conventional wisdom that US companies are paying less in taxes than companies operating elsewhere in the globe, and that they have borrowed immense amounts of cash over the last decade to buy back stock. Those "facts" are now driving political debate and may well lead to changes in policy, but they are more opinions than facts, and the data can be the arbiter.
If you are wondering why I am sharing the data, let's get real. Nothing that I am doing is unique, and I have no secret data stashes. In short, anyone with access to data (and there are literally tens of thousands who do) can do the same analysis. I lose nothing by sharing, and I get immense karmic payoffs. So, please use whatever data you want, and in whatever context, and I hope that it saves you time and helps you in your decision making and analysis. 

The Caveats
The last decade has seen big data and crowd wisdom sold as the answers to all of our problems, and as I listen to the sales pitches for both, I would offer a few cautionary notes, born out of spending much of my lifetime working with data:
  1. Data is not objective: The notion that using data makes you objective is nonsense. In fact, the most egregious biases are data-backed, as people with agendas pick and choose the data that confirms their priors. Just as an example, take a look at the data on what US companies paid in taxes in 2019 in this dataset. I have reported a variety of tax rates, not with the intent to confuse, but to note how the numbers change depending on how you compute them. If you believe, like some do, that US companies are shirking their tax obligations, you can point to the average tax rate of 7.32% that I report for all US companies, and note that this is well below the federal corporate tax rate of 21%. However, someone on the other side of this debate can point to the 19.01% average tax rate across only money making companies (since only profits get taxed) as evidence that companies are paying their taxes.
  2. Crowds are not always wise: One of the strongest forces in corporate finance is me-tooism, where companies decide how to invest, how much to borrow and what to pay in dividends by looking at what their peers do. In my datasets, I offer them guidance in this process, by reporting debt ratios and dividend payout ratios for sectors, as well as regional breakdowns. The implicit assumption is that what other companies do, on average, must be sensible, but that assumption is not always true. This warning is particularly relevant when you look at the pricing metrics (PE, EV to EBITDA etc.) that I report, by sector and by region. The market may be right, on average, but it can also over price or under price a sector, at times.
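To see how much the aggregation choice in the first caveat matters, consider a toy example, with made-up effective tax rates rather than the dataset's actual numbers:

```python
# Toy effective tax rates for six hypothetical firms; the three
# money-losing firms pay nothing, so their rates are recorded as zero.
effective_rates = [0.21, 0.18, 0.15, 0.0, 0.0, 0.0]

avg_all = sum(effective_rates) / len(effective_rates)

money_makers = [r for r in effective_rates if r > 0]
avg_profitable = sum(money_makers) / len(money_makers)

# Same underlying data, two defensible "average tax rates":
print(f"Average across all firms:        {avg_all:.2%}")         # 9.00%
print(f"Average across profitable firms: {avg_profitable:.2%}")  # 18.00%
```

Both numbers are computed honestly from the same data; which one you quote depends on the argument you are trying to make.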
I respect data, but I don't revere it. I don't believe that just having data will give me an advantage over other investors or make me a better investor, but harnessing that data with intuition and logic may give me a leg up (or at least I hope it does).
