I love that you're writing about tests, because IMO too many people don't realize the value. But. I'm going to take issue with some of your points.
You say you should only test through the public contract and you should have more unhappy paths than happy paths. Let's do a thought experiment to see why those two things may not mesh well.
Let's imagine you're calling an API you don't control. The response has four major parts, and each part can be an empty string, a single object, or an array of objects shaped like that single object. Across the four parts, some fields of those objects are shared and some differ.
So, logically, you might write a normalizer that replaces an empty string with an empty array, wraps a single object in an array, and leaves an array as it is. Then you might have a function that formats the shared fields of each element, possibly composed with functions that handle the fields that differ.
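That normalizer is small enough to sketch. This is a hypothetical version (the names `ApiPart` and `normalize` are mine, not from any real API):

```typescript
// A part of the hypothetical API response: "", one object, or an array of objects.
type ApiItem = Record<string, unknown>;
type ApiPart = "" | ApiItem | ApiItem[];

function normalize(part: ApiPart): ApiItem[] {
  if (part === "") return [];           // empty string becomes an empty array
  if (Array.isArray(part)) return part; // arrays pass through unchanged
  return [part];                        // a lone object gets wrapped
}
```

Everything downstream now only ever sees an array, which is exactly what makes the downstream formatters cheap to test.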
To test this through the public interface alone, checking all the unhappy paths multiplies the work: with three possible shapes for each of four parts, that's 3^4 = 81 shape combinations before you even get to bad field values. Whereas if you test your common formatter directly, 2-3 test cases cover the happy and unhappy data. Then you can test the per-part formatters against the known outputs of the common formatter from the tests you just wrote, plus happy/unhappy data for the part-specific fields, and on up the chain, until all the top level has to answer is: given four parts that are empty strings, single objects, and arrays, does it work?
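Here's what those 2-3 direct cases might look like for a hypothetical shared formatter (I'm inventing the `id`/`label` fields and the defaulting rules purely for illustration):

```typescript
// Hypothetical shared fields carried by items in every part.
interface RawItem { id?: number; label?: string }
interface FormattedItem { id: number; label: string }

function formatShared(item: RawItem): FormattedItem {
  return {
    id: item.id ?? 0,                                 // unhappy path: missing id
    label: (item.label ?? "").trim() || "(untitled)", // unhappy path: blank label
  };
}
```

Two or three inputs — one fully populated, one empty, maybe one with whitespace junk — and this function is pinned down for good; no public-boundary test ever has to re-prove it.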
And then, since you have absolute confidence in how all those little formatters work, you can keep a single raw test-data file (which of course you've already been using in your business-layer tests) and just run the formatters over it to produce data for other tests, including in the UI.
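A sketch of that fixture-deriving step, with everything hypothetical (in a real suite `raw` would come from the shared JSON file, not an inline literal):

```typescript
// Stand-in for the shared raw fixture file, e.g. `import raw from "./fixtures/raw.json"`.
const raw = { partA: "" as const, partB: { id: 1 } };

// The already-trusted normalizer from the business-layer tests.
const normalize = (p: unknown): unknown[] =>
  p === "" ? [] : Array.isArray(p) ? p : [p];

// Run the trusted formatters over every part once; reuse the result in UI tests.
const uiFixture = Object.fromEntries(
  Object.entries(raw).map(([key, value]) => [key, normalize(value)])
);
```

The point is that the UI tests never hand-write formatted data; they consume whatever the real formatters produce from the one raw file, so the fixtures can't drift out of sync.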
A side benefit is that seeing and testing all these little pieces in isolation gives you all kinds of ideas for how they could be recombined internally, or you may realize you should formally expose them and use them more widely, or just use them differently. I've been reading the tests for Redux connect today, and seeing the facile way they manipulate the inputs and outputs gives me all kinds of ideas for how I could use that function that aren't obvious from the docs.
I also disagree with you about not writing mocks/stubs. When you mock something, you know 100% what that dependency is, so there's much less guesswork in your test (did the real thing change the data to a format you forgot about?). You also know for sure where the issue is. If the input is right and the output is wrong, then the issue is certainly in your code. But if you're not clear what the input even is and the output is wrong, well, maybe the input isn't what you thought? You can't be sure the issue is your code.
And a lot of the time, there's a lot of magic going on between different layers of code, and even the person who wrote it finds it hard to remember what a given input will look like after it's been through all that magic. It is VERY helpful to have the input constructed right there next to the test so you can see it with all the magic stripped away.
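That "input right next to the test" idea, as a minimal sketch (the `Fetcher`/`loadStatus` names and behavior are invented for the example):

```typescript
// The dependency is just a function type, so a stub is trivial to hand-write.
type Fetcher = () => { status: string };

// Hypothetical code under test.
function loadStatus(fetch: Fetcher): string {
  return fetch().status.toUpperCase();
}

// The stub IS the input, sitting beside the assertion: no layers of magic
// between the data going in and the expectation coming out.
const stubFetch: Fetcher = () => ({ status: "ok" });
```

If `loadStatus(stubFetch)` ever returns something other than `"OK"`, there is exactly one place the bug can be.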
Also, I think writing a ton of mocks gives you dev superpowers, because you have to understand all your dependencies really well in order to write just enough of them to use in your test, but not so much that the failure could be in your mock code.