News of the Week (January 4 - 8, 2025)

In case you missed it, the majority of this week’s content was already sent:

Table of Contents

1. My Investing Rules 

As part of the ongoing, inter-earnings season investing concepts theme, I wanted to discuss some personal stock market rules that I have imposed on myself.

Options:

First and foremost, I do not speculate with naked options. I very sparingly use them as a hedging tool or what I call “disaster insurance” surrounding earnings reports, but that is rare. Why do I avoid this type of investing vehicle? We do not personally control the outcomes of these companies. We aren’t insiders with access to real-time information on how things are progressing. We rely on backward-looking earnings reports and uncertain guidance to gauge a company’s health in increments of three months, with some disclosures sprinkled in here and there. This isn’t me saying public stocks are dangerous. I think the U.S. stock market is the most powerful and reliable wealth builder ever created. But I think the pursuit of that wealth creation must be slow, calculated and responsible. Options are not that.

Being right about a stock’s direction is hard enough. Forcing ourselves to be right about both that direction and its timing is something I do not find worth pursuing. It’s speculating on how exogenous factors that affect day-to-day stock prices and sentiment will play out. You will constantly see people post pictures of their options trades on social media and how they made several hundred percent in a day. We have to know that this sharing is immensely cherry-picked. These folks are not showing you the dozen other options trades that went to zero, and they’re probably not showing you how their portfolios are performing either. Not a coincidence. There are successful options traders, but I find boring old equity to be far easier and less stressful.

Margin:

Next, I never use margin. Debt is not for speculative assets where we don’t enjoy the benefit of asymmetric information. Debt is not for things that violently fluctuate in price. Debt is to be carefully and sparingly used to put roofs over our heads or to be able to drive to work. Unless carefully, obsessively monitored, margin is asking to blow up an account. It is inviting the magnification of losses (and gains) and accepting the risk of margin calls and required sales. It is begging to force ourselves into a corner and liquidate winning positions that we’d otherwise hold for the long term. And? It’s also expensive. I’d rather focus on fundamental research. Markets are stressful enough without worrying about margin calls and upside is more than compelling enough without it too. I find most options and margin approaches to resemble gambling more than investing.

Shorting:

Thirdly, I don’t short. My favorite part of public investing is the profit potential of each position. Upside is uncapped while downside is inherently finite. When you short, you are intentionally flipping that equation and willfully uncapping your potential downside. I see no reason to do that. If I think a company is ridiculously overvalued or toxic, I’ll just find a different investment case that I’d like to buy into. There are always plenty of opportunities out there on the safer long end and it’s just harder to make money going the other way. Expensive can get more expensive, crazy can get crazier, and “markets can stay irrational for longer than you can stay solvent.” A good reminder, commonly attributed to the economist John Maynard Keynes.

Maximum Allocation:

Over the last several years, I have toyed with the idea of “maximum investment allocation.” I do think diversification matters a lot, as, again, we do not have direct control over the outcomes of our companies and some failure is inevitable. Nobody comes close to batting 1.000, and putting all of our eggs in one basket means one measly ground out to third is the end of our portfolio.

As I’ve grown more experienced and confident in my ability to assess companies, I’ve gradually raised the allocation ceiling. As of right now, 8% of total net worth is the current cap. This is based on capital outlay, rather than what the position is worth, as I’m more than happy to let winners like Amazon and SoFi grow larger on their own. There’s one large caveat here. The 8% is for profitable companies. The cap is just 4% for companies that have yet to inflect to profitability (just Lemonade in my portfolio). That lack of positive margins makes firms more speculative and more volatile, considering they haven’t proven they can accomplish the only thing that matters for long-term returns… profitable growth.

I am more flexible with this rule than the other three. While this hasn’t happened yet, I do think I’d entertain breaching that 8% clip if a world-class blue chip like Amazon sharply sold off for reasons I found unfair. I don’t think I’d do this for the vast majority of companies.
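For readers who track this sort of rule in a spreadsheet or script, here is a minimal sketch of how a cost-basis cap like the 8%/4% split could be checked. The function names, thresholds as constants and example numbers are my own illustration, not the author's actual tooling:

```python
# Illustrative position-cap check based on capital outlay (cost basis),
# not current market value, mirroring the 8%/4% rule described above.
# All names and example figures here are hypothetical.

def max_allocation(net_worth: float, is_profitable: bool) -> float:
    """Maximum dollars of capital outlay allowed in one position."""
    cap_pct = 0.08 if is_profitable else 0.04  # 8% profitable, 4% pre-profit
    return net_worth * cap_pct

def can_add(net_worth: float, cost_basis_so_far: float,
            new_outlay: float, is_profitable: bool) -> bool:
    """True if adding new_outlay keeps total outlay at or under the cap."""
    return cost_basis_so_far + new_outlay <= max_allocation(net_worth, is_profitable)

# Example: $500k net worth, $35k already deployed into a profitable company.
print(can_add(500_000, 35_000, 5_000, is_profitable=True))   # exactly at the 8% cap
print(can_add(500_000, 35_000, 5_001, is_profitable=True))   # over the cap
```

Note that because the check uses cost basis, a winner growing past 8% of net worth on its own never triggers it, which matches the rule as described.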

One Strike and You’re Out:

I’m patient with companies. One blunder is usually never enough to force me out. The most iconic firms in the world have all stubbed their toes on the way to becoming historically successful investments. With that said, there is a red flag that forces me out of positions immediately: accounting fraud risk and general dishonesty. Because we are not insiders and because we don’t live in the day-to-day operations of a company, we rely on trustworthy leadership. This is to be earned and never unconditionally enjoyed. My assumption will be to trust teams, as I believe in innocence until proven guilty. But? If there’s the slightest semblance of shadiness… if it becomes even a little likely that I cannot trust the words coming out of a team’s mouth… if I begin to question the moral integrity of those leaders… if accounting gimmicks surface… I am out. No questions asked. Done. The stock may go up and up and that’s fine; someone else can make that money. I can’t sleep while investing in companies with management I don’t trust.

Letting shady and charismatic leaders seduce us into thinking their words are gospel is a clear path to losses. I made that mistake in 2022, believing an Upstart leadership team that promised they wouldn’t use their balance sheet to hold more loans. They also claimed immunity from macro headwinds when no lender is ever immune. I knew better, yet I listened to them anyway, as I thought there was no way they’d be so publicly wrong. What did that mistake cost me? Turning several-hundred-percent gains into a position that slightly underperformed benchmarks through the holding period. I gave up significant alpha, which leads me to my final rule.

Relax:

Accept being a human. We all make mistakes. We are not perfect. These rules above are guidelines designed specifically with that idea in mind and crafted in the spirit of damage control for when we do mess up. Our blunders are inevitable, and it’s important to structure our approach in a way that makes this entirely OK. If you can’t endure a market downturn, you’ll never enjoy the gradual, long-term wealth compounding opportunities that markets provide. We need to ensure our next mistake and the one after that don’t lead to our financial ruin. And we need to make sure we aren’t taking too much speculative risk or acting irresponsibly in pursuit of that objective.

As long as that’s the case, we can’t beat ourselves up for being wrong about a name. We can’t get internally angry the next time one of our opinions turns out to be wrong. This isn’t even a matter of “well I’m not Warren Buffett.” Buffett isn’t perfect either. He sold Snowflake at the bottom and has had other failed investments to his name. Just like me and just like you. That’s why we manage a portfolio of quality, diversified companies. And that’s why we must smile, contemplate, grow and move on with our heads held high next time we misstep. If you’re behaving responsibly and doing things the right way, all a mistake represents is an opportunity to learn. I’ll be learning until I die.

Conclusion:

This approach has served me quite well over the last few years. It’s how I can comfortably invest in disruptors and high-growth firms while also remaining calm and sleeping well at night. This is how I allow myself to participate in the generational growth stories and high-fliers without feeling anxious, irresponsible or overwhelmed. It’s how I safely take more risk without sacrificing compelling risk/reward. But? It’s just my approach. You develop the guidelines that work for you.

Everyone does things differently. There will never be a one-size-fits-all strategy for stock picking, as it’s a byproduct of our circumstances and personalities. I simply share this information to offer inspiration as you form your own approach, and I invite you to use or ignore whatever you want (as always).

2. Meta (META) – TikTok Ban & eBay

It is looking increasingly likely that TikTok will be shut down in the USA on January 19th. There are several interested bidders in the company, but it’s unclear if a deal will get done or if TikTok is even willing to sell to a U.S.-based buyer. We will have to see how this unfolds, especially considering the new administration seems to be against a ban.

If a ban goes through, that would clearly deliver a large engagement boost to Meta’s apps and everyone else competing for screen time. Still, there is a small risk with that outcome as well. Meta enjoys a lot of revenue from Chinese-based sellers, and China could easily retaliate by barring those sellers from accessing Meta’s apps. While possible, I don’t think that’s super likely. The Chinese government generally is eager to let its businesses sell to other countries and bring value and money into the country. It clamps down on restrictions when there are foreign companies selling to Chinese buyers and pulling value out of the nation. Meta isn’t even allowed to operate in China at this point and this form of retaliation would be China cutting off its nose to spite its face.

This is not the reason to own Meta. Its fortress ecosystem, AI leadership, world-class team, elite financials and unmatched network effects are the reasons to own this one like I do. Still, we will take tailwinds whenever we can get them and this would surely lead to a bump in its results.

In other Meta news, it will begin including eBay (EBAY) listings on its marketplace. This should improve assortment and monetization.

3. Disney (DIS) – Various News

a. Bullish Analysts

Disney got an upgrade to overweight by Redburn ($147 price target) based on optimism surrounding its streaming business. Like everyone else, it sees this turning into a real profit growth driver in the coming years, with the margin inflection now in the rear view mirror. It now thinks the streaming business is a larger tailwind than the linear business is a headwind. Maybe they read the newsletter. Wedbush thinks the Hulu + Live TV and Fubo merger will “create a dominant player” to more effectively compete with the scale of Sling TV and YouTube.

b. Advertising Monthly Active Users (MAUs)

Disney disclosed a new metric this week. It has 157 million ad-supported MAUs across its streaming platforms, with 112 million in the USA. This compares to 70 million for Netflix as of two months ago, but the methodology between the two companies is different. We don’t even know how Netflix calculates its own number. Disney counts a customer as an MAU if they have watched more than 10 seconds of ad-supported content over the last month. It then multiplies the account count by 2.6x, as that’s the average number of users per account.
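Because the methodology is bespoke, it can be useful to back out what the disclosure implies. A back-of-envelope sketch using only the figures above (the helper name and rounding are mine):

```python
# Back out the account counts implied by Disney's 2.6 users-per-account
# multiplier. Inputs are the disclosed MAU figures; everything else is
# my own illustrative arithmetic.
USERS_PER_ACCOUNT = 2.6

def implied_accounts(reported_maus: float) -> float:
    """Ad-tier accounts implied by a reported MAU figure."""
    return reported_maus / USERS_PER_ACCOUNT

global_accts = implied_accounts(157e6)  # roughly 60 million accounts
us_accts = implied_accounts(112e6)      # roughly 43 million accounts
print(f"{global_accts / 1e6:.1f}M global accounts, {us_accts / 1e6:.1f}M U.S. accounts")
```

So the 157 million headline figure rests on roughly 60 million ad-tier accounts, which is the number a Netflix-style per-account methodology would presumably be closer to.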

There’s really nothing to compare this to and no consensus estimates to understand how this stacks up vs. expectations. There were no expectations considering this disclosure was a surprise. I’m hoping this inspires competitors to embrace a similar metric, so we have more of an apples-to-apples comparison across the budding field.

c. Venu

While the proposed Venu sports venture (Fox + Warner Bros. + Disney sports rights in a bundle) resolved legal battles with Fubo this week, the DirecTV antitrust claims are not going away. Based on all recent developments, Venu is no longer happening. Disney was asked last quarter about this risk and didn’t think it would be material to results.

4. CrowdStrike (CRWD) – Public Sector Win

CrowdStrike received FedRAMP Moderate authorization for three additional modules this week: Next-Gen SIEM, Falcon for IT and Data Protection. Notably, this is not FedRAMP High authorization, which would allow it to cater to the most sensitive and secure agencies, but this is still quite meaningful. The three modules join FedRAMP authorization for several of its products across endpoint and cloud security. Why does this matter? FedRAMP Moderate clearance is required for various agencies to actually use and deploy software.

In a world obsessed with vendor consolidation and using overarching product platforms, this allows CrowdStrike to be a much more valuable partner for public sector clients. It will surely improve its ability to deliver more compelling product packages to these customers, and it also offers more evidence of the company gracefully moving beyond the ugly July outage. Clearly it has not burned the trust of the federal government.

5. Nvidia (NVDA) – Jensen Huang Consumer Electronics Show (CES) 2025 Keynote & an Investor Q&A

a. Jensen Huang Keynote

RTX Blackwell:

Jensen showed off the new RTX Blackwell family and GeForce RTX 50 series (for computer graphics), which uses the Blackwell architecture. It boasts 3x the AI compute capacity of the previous generation, utilizes Micron’s GDDR7 memory and fully “mixes AI and computer graphics workloads.” The infusion of AI into gaming and graphics design creates some exciting new capabilities.

GeForce RTX 50 paves the way for deep neural networks to run alongside these GPUs. That makes things like light simulation (ray tracing) rational for every individual pixel that goes into frames. It vastly accelerates image rendering by using AI to do most of the work with improved performance and graphics quality. Specifically, Nvidia can use GenAI to turn 2 million pixels into 33 million to generate hyper-realistic frames in an ultra-efficient manner. This relies on well-trained models, which is why Blackwell and Nvidia’s supercomputer infrastructure are such important pieces of this new product. As part of this announcement, Nvidia showed off new tools like texture compression, to ensure rendered images are as pristinely beautiful as possible.

The Blackwell-enabled RTX 50 series features a few different products with various levels of compute capacity. It doubles the performance of its once-industry-leading predecessor and offers products with the same amount of power for $549 vs. $1,599 previously. 
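As a quick sanity check on that pricing claim, the implied discount for equivalent performance works out like this (simple arithmetic on the two figures above; the variable names are mine):

```python
# Simple arithmetic on the stated RTX 50 series pricing claim:
# the same performance tier at $549 that previously cost $1,599.
old_price, new_price = 1_599, 549
discount = 1 - new_price / old_price  # fraction of the old price saved
print(f"Equivalent performance for roughly {discount:.0%} less")
```

In other words, buyers get the prior flagship's level of performance for roughly a third of its launch price.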

Blackwell (part 2):

As you’re reading through this, consider that a token is the unit of measurement for GenAI model queries and responses. It represents units of text, images and video frames or, soon, “action tokens” in the world of physical AI (more later). If prompts are complex, many tokens will be chained to form a full response. GenAI models use token inputs (from us) to generate token outputs (from them).
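For intuition, here is a toy illustration of the prompt-in, tokens-out flow. Real models use subword tokenizers (e.g., byte-pair encoding), so actual token boundaries and counts differ; this whitespace split is purely a simplification of my own:

```python
# Toy stand-in for a GenAI tokenizer. Real tokenizers (BPE, SentencePiece)
# split text into subword units, so counts here are only directional.
def toy_tokenize(text: str) -> list[str]:
    return text.split()

prompt = "Explain why the sky is blue"
tokens = toy_tokenize(prompt)
# A longer, more complex prompt produces (and elicits) more tokens,
# which is why token volume is the natural unit for pricing compute.
print(len(tokens), tokens)
```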

Huang walked us through the various existing and emerging scaling laws for high-performance compute. The pre-training law is broadly understood at this point. The more data you infuse into scalable models, the better they become. That makes sense. Just like test scores go up when we read and contemplate relevant study materials, model proficiency scores rise when the same things play out.

There are now two more laws. First is “post-training.” Jensen compared this to a mentor that reminds us of our learnings and gives us new ideas and inspiration after initial training. For example, if I know how to swing a golf club, but really need to practice more to establish muscle memory and perfect the mechanics, then doing so will make me a better golfer. Something called synthetic data generation helps here. Using the same example, this essentially takes my swing data and tweaks it with near-endless simulation variations to know exactly how to tell me to improve. It can essentially use small chunks of data to generate all possible outcomes and learn from them. This shrinks model training time and data needs considerably. Maybe I’ll even break 80.

Finally, test-time scaling is model reasoning. As Jensen explained, instead of adding or tweaking model parameters, this is a model deciding how hard and long it needs to work to create a token for a specific query. Some output tokens are easy to generate and some are more intricate. This makes sure an AI transformer isn’t working harder than it needs to… like the Bills sitting their starters in week 18 because they already worked hard enough to lock up another playoff spot. Conversely, my beloved Detroit Lions needed to secure a one seed last week. So? We had an injured linebacker playing with a cast on his hand as we maximized effort for that specific task.

All of these laws rely on world-class compute power and efficiency that is constantly getting better. So? They relied on Hopper yesterday, rely on Blackwell today and will rely on Rubin in the future. None of this works without perpetually scaling high-performance compute made possible by Nvidia’s hardware. Without it, the crippling issue of rampant “compute inflation” would make the emerging and future GenAI apps entirely irrational to build.

  • As previously announced, Blackwell is in full production and every single cloud service provider (CSP) is now using it in their data centers. There are 200 different SKUs or configurations, with various CPU infrastructure and networking equipment usage, to cater to all data center needs.

The GB200 NVL72 links 72 total Blackwell GPUs (36 Grace Blackwell GB200 superchips, each pairing one Grace CPU with two Blackwell GPUs) over NVLink into what behaves like one giant GPU, and multiple racks can then be networked together into clusters spanning many thousands of GPUs. The larger the compute cluster, the better the training, the more scalable the data processing and the more exciting potential GenAI applications become. Specifically, the GB200 NVL72 allows companies to scale models to 3x larger at the same cost vs. the predecessor. It has 1.2 petabytes per second of memory bandwidth to “basically process the entire internet.”

Jensen pretending to be Captain America with a GB 200 unit. Great sense of humor. Not just a visionary!

Agentic AI:

Agentic AI, to Huang, is a “perfect example of test-time scaling.” Rather than interacting with a single model that can recognize speech or generate simple marketing materials, Agentic AI leans on a “system of models” with a wide range of broad, complementary functions. “Some are to understand customer intent, some retrieve info from the web or storage, some generate charts” and all communicate and support one another. These models think together, reason, test ideas, determine needed resources per optimal query and also strategize the best way to complete a given task. All behind the scenes. You aren’t telling it how to do something. Instead, you’re telling it what to do and letting it figure out how to achieve that.

Three key products support Nvidia’s Agentic AI vision. All of these are review. Nvidia Inference Microservices (NIMs) help its software and algorithm suite facilitate high-value use cases from its hardware. It packages everything a company needs to embrace enterprise AI – from chips to networking equipment to CUDA software – to help them take the headache out of participating in this technological revolution. It makes AI agent deployment much easier.

Next, NeMo offers guided step-functions to build, train and integrate GenAI models. Huang thinks of this as “digital employee onboarding, training and evaluating.” It makes safely and quickly bringing autonomous agents into a company’s secure ecosystem easy, with a full library of algorithms to train these agents with a company’s own data. This means the model learns the lingo, workflows and practices of a specific firm, rather than being trained on more general ideas. Huang thinks NeMo will make the “IT department the new AI Agent HR department of the future.” Finally, it offers AI Agent blueprints for companies to gain inspiration on what to build and how to go about doing so.

Meta:

Jensen had a lot of praise to give to Meta during the presentation:

"Llama 3.1 from Meta is a complete phenomenon... it is the singular reason why just about every enterprise across every industry has been activated to work on AI."

Jensen Huang

Based on this praise, Nvidia announced the Nvidia Llama Nemotron language foundation models. These are all built with Llama 3.1 and optimized by Nvidia for enterprise AI adoption. There are three models within this family: Nano for rapid and low-cost token generation at lower accuracy rates, Super for a balance among speed, cost and accuracy and Ultra for the highest-accuracy, highest-cost use cases. Ultra was classified as a “teacher model” that can judge and assess responses generated by other models while helping them improve. The jointly-created models lead the world in categories like chat, instruction following and retrieval-augmented generation (RAG) (fetching needed external info).

AI for Personal Computers (PCs):

Jensen wants to create a world where application programming interfaces (APIs) are granularly malleable to fit each individual user. He sees a laptop that is purpose-built for us, rather than the other way around. And? He thinks Windows WSL2 is “the answer for this.” WSL2 stands for Windows Subsystem for Linux 2, and it is two operating systems in one. It lets a user run Linux in parallel with Windows and lets “cloud native developers use Windows to develop for the Cloud.” Furthermore, Jensen thinks WSL2 will improve virtual machine efficiency and unlock the ability to unleash Blackwell-powered compute acceleration right on a user’s device. This is what will help make creating custom APIs on a by-customer basis rational to pursue. WSL2 supports Blackwell, its CUDA suite and NIMs “out of the box.”

“Our focus is to turn Windows WSL2 into a target first class platform that we will support and maintain for as long as we shall live.”

Jensen Huang

More on PCs in the final section of this piece. 

Physical AI:

In my mind, this was the coolest part of the presentation. Physical AI is when Generative and Agentic AI move from “asking a question with a prompt to requesting an action with it.” As Huang puts it, instead of creating output tokens made of text, image, sound or video, Physical AI entails “action tokens.”

To realize this vision, Nvidia created Cosmos. This is a “world model” that is “grounded with an understanding of gravity, friction, spatial relationships and physical cause and effect like object permanence.” The models must understand all of these laws and physical object relationships with surgical accuracy to actually be viable. It must be “designed to understand the physical world,” and so it is.

This is really where synthetic data generation shines in the most dramatic way. Collecting physical data is extremely expensive and quite difficult to scale. With Nvidia, firms don’t need to collect all that much of it. If they have a critical mass, Nvidia can fill in the blanks by creating digital twins, through Omniverse, that can simulate a physical environment like a road or a factory. Importantly, as Huang put it, Omniverse (which runs on RTX Blackwell) is “physics grounded,” which can greatly speed up Cosmos’s ability to understand physical-world relationships with high accuracy. It can generate hyper-realistic objects to avoid and fully-fledged simulations that mimic actual outcomes with endless variations. In turn, this exponentially improves and expedites the pace of Cosmos training to allow companies to do far more with less. Thank you, Blackwell and DGX supercomputers.

“We really hope this does for the world of robotics and industrial AI what Llama 3 has done for enterprise AI.”

Jensen Huang, saying more nice things about Meta

I’ve talked a lot in the last few months about an autonomous vehicle (AV) monopoly being highly unlikely. That’s not because I don’t see Waymo and Tesla as highly, highly capable companies here. Those are the leaders today, with Waymo ahead in my mind. Instead, it’s because I think Nvidia will vastly lower the barrier to entry for new partners like Toyota to get there as well. Maybe not as quickly… but eventually. 

Nvidia announced the “Thor” Blackwell robotics processor for AVs. This comes with 20x the processing capacity of the previous generation, which was already “the standard” for the space. It is hard to close technological gaps with the leader when they’re innovating and driving improvement as rapidly as Nvidia is. In other AV news, Nvidia debuted its Drive operating system (OS). This is the first and only OS certified up to “ASIL-D” (the highest automotive functional safety level under the ISO 26262 standard).

Thor

While Cosmos is a vital piece of the Physical AI equation, it’s not the only one. DGX is used to initially train the AI models, AGX is used to deploy them at scale and Omniverse + Cosmos consistently use synthetic data generation and AI reinforcement learning to ensure these models are always getting better.

  • Nvidia announced partnerships with KION (warehouse automation) and Accenture (system integrator) within the realm of Physical AI.

“Cosmos and Omniverse can take thousands of drives and turn them into billions of miles for mountains of training data.”

CEO Jensen Huang

Final notes:

Nvidia unveiled its latest DGX AI supercomputer (pictured below). This is called “Project DIGITS” and is meant to “bring an AI supercomputer to your desk.” This report stoked concern among CPU vendors like Intel and AMD, as the product could eventually take some PC CPU demand away from them. Nvidia also partnered with MediaTek to co-create CPUs and sees a large opportunity here for the taking.

  • Debuted an application that can use Apple Vision Pro headsets to be a robot’s personal twin to train them with your own movements. Just really cool stuff. I know I’m a nerd.

b. Investor Q&A

Kress on Guidance:

"Blackwell guidance [for the quarter] called for several billion and we will probably do a bit more than that. Blackwell is doing quite well."

CFO Colette Kress this week

Huang on GPU Demand Runway:

“GPU growth is likely going to be sustained long term… the trillion dollars worth of general purpose computers will be modernized over the course of, call it, a few years to get replaced with modern computers.”

Founder/CEO Jensen Huang

Huang on Quantum Computing:

“And so in the case of quantum computing, it turns out that you need a classical computer to do error correction with the quantum computer. And that classical computer better be the fastest computer that humanity can build, and that happens to be us. And so we are the perfect company to be the classical part of classical quantum.”

Jensen Huang

“We are working with just about every quantum computing company in the world. We're extending CUDA to quantum. They use it for simulating their algorithms, simulating the architecture, creating the architecture itself and developing algorithms that we can use someday. And when is that someday? We're probably somewhere between, in terms of the number of qubits, order of 5 orders of magnitude or 6 orders of magnitude away. And so if you kind of said 15 years for very useful quantum computers, that would probably be on the early side. If you said 30 is probably on the late side. If you picked 20, I think a whole bunch of us would believe it.”

Jensen Huang

These words were enough to lead to 40% single-day stock price declines for pure-play quantum computing firms. It led CEOs of some of these companies to claim Huang doesn’t understand the field. My money would be on him understanding it just fine. There may be technology today (like D-Wave’s quantum annealers) that is ready for some commercial deployment with highly limited use cases, but full-fledged quantum computing, to him, is still very far away. That’s when this becomes financially interesting, and it is not close to happening.

6. Headlines & Macro

a. Headlines

Bernstein upgraded Lululemon to outperform based on consumer health and an expected recovery in North America. Needham also upgraded Lululemon based on observed improvement in U.S. trends. SO IMPORTANT and so encouraging to hear.

Uber’s Dara Khosrowshahi stepped down from Aurora’s board to “focus on Uber.” This was a weird headline. I bet there’s more to this story. Uber owns more than 10% of the autonomous truck company and added to its investment just 7 months ago. Uber also struck a partnership with Delta this past week. Delta integrated it into its loyalty program and cut ties with Lyft after a nearly decade-long relationship. UberX airport rides will now contribute to Delta member rewards. Another small piece of value creation within the overarching Uber ecosystem to motivate lower churn and higher lifetime customer value.

Oppenheimer channel checks show that Zscaler is well positioned, with the “competitive concerns overblown and unproven.” It remains bullish on the name (so do I).

DraftKings and Delta announced a new partnership. I guess people needed the option to bet on how quickly their snacks would arrive and if the person next to us uses the entire arm rest or not. Priorities. On a serious note, I’m sure this will simply entail some iCasino games for domestic flights, but I’m not sure how this would work if you’re flying over states where this is illegal. More details to come.

Wedbush upgraded Shopify to overweight based on continued underlying business strength. They boosted 2025 revenue estimates by about 2% and EBIT estimates by 8%. Oppenheimer also released a bullish note on the company this week based on the same general ideas. Business is booming. Shopify is catering to the big boys. The runway is massive.

Anthropic (private OpenAI competitor that Amazon and Alphabet have large stakes in) is looking to raise another $2 billion at a $60 billion valuation.

JPMorgan lowered MercadoLibre estimates this past week due to Brazil FX headwinds we’ve discussed recently. It lowered EBIT estimates for 2025 and 2026 by 9% and 4% respectively.

At the Consumer Electronics Show (CES), AMD unveiled several AI Ryzen CPUs for Dell PCs. CES is mostly closed to the public, aside from the keynote from Nvidia that we already covered. Still, there were some great threads on social media to show off some of the coolest products from the event. I have linked to a few of them here, along with the CES Twitter page. I can’t help but feel excited for the future after looking through all of this. Innovation is cool.

b. Macro

I talked through my macroeconomic views in this week’s portfolio update article.

Employment & Consumer Data:

  • The Institute for Supply Management Non-Manufacturing Employment Index for December was 51.4 vs. 51.4 expected and 51.5 last month.

  • JOLTs Job Openings for November were 8.1 million vs. 7.7 million expected and 7.84 million last month.

  • The ADP Nonfarm Employment Change for December was 122,000 vs. 139,000 expected and 146,000 last month.

  • Initial Jobless Claims came in at 201,000 vs. 214,000 expected and 211,000 last report.

  • Average Hourly Earnings M/M for December rose by 0.3% vs. 0.3% expected and 0.4% last month. Y/Y readings were a tick cooler than expected at 3.9%. Considering the economy’s level of growth, this is right where we want it to be.

  • Nonfarm Payrolls for December were 256,000 vs. 164,000 expected and 212,000 last report. 

  • The Labor Force Participation rate came in at 62.5% vs. 62.5% last report.

  • Unemployment for December came in at 4.1% vs. 4.2% expected and 4.2% last month.

  • Michigan Consumer Sentiment for January was 73.2 vs. 74 expected and 74 last month.

Inflation Data:

  • The ISM Non-Manufacturing Prices Index for December was 64.4 vs. 57.5 expected and 58.2 last month.

  • Michigan 1-year inflation expectations for January were 3.3% vs. 2.8% expected and 2.8% last month. 

  • Michigan 5-year inflation expectations for January were 3.3% vs. 3.0% expected and 3.0% last month.

Output Data:

  • The Services Purchasing Managers Index for December was 56.8 vs. 58.5 expected and 56.1 last month.

  • Factory Orders fell 0.4% M/M in November vs. -0.3% expected and 0.5% last month.

  • The ISM Non-Manufacturing PMI for December was 54.1 vs. 53.5 expected and 52.1 last month.
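One simple way to scan prints like those above is a generic beat/miss comparison. This is just a convenience sketch of mine, not any official methodology; note that whether a "beat" is good news depends on the series (hotter-than-expected inflation prints "beat" in the wrong direction):

```python
# Generic beat/miss scanner for macro prints (illustrative only).
# "beat" simply means actual > expected; interpret per series.
def surprise(actual: float, expected: float) -> str:
    if actual > expected:
        return "beat"
    if actual < expected:
        return "miss"
    return "in line"

# A few of this week's prints, pulled from the bullets above.
prints = {
    "Nonfarm Payrolls (thousands)": (256, 164),
    "ISM Non-Manufacturing PMI": (54.1, 53.5),
    "Services PMI": (56.8, 58.5),
}
for name, (actual, expected) in prints.items():
    print(f"{name}: {actual} vs. {expected} expected -> {surprise(actual, expected)}")
```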
