• 0 Posts
  • 27 Comments
Joined 2 years ago
Cake day: June 16th, 2023


  • Lower storage density chips would still be tiny, geometry-wise.

    A wafer of chips will have defects; the larger the chip, the bigger the portion of the wafer spoiled per defect. Big chips are way more expensive than small chips (rough yield math sketched after this comment).

    No matter the capacity of the chips, they are still going to be tiny and placed onto circuit boards. The circuit boards can be bigger, but areal density is what matters rather than volumetric density. 3.5" is somewhat useful for platters due to width and depth, but particularly height for multiple platters, which isn’t interesting for a single SSD assembly; 3.5 inch would most likely waste all that height. Yes, you could stack multiple boards in an assembly, but it would be better to have those boards as separately packaged assemblies anyway (better performance and thermals with no cost increase).

    So one can point out that a 3.5 inch footprint is a decently big board, and maybe make that height-efficient by specifying a new 3.5 inch form factor that’s like 6mm thick. Well, you are mostly there with the E3.L form factor, but no one even wants those (designed around 2U form factor expectations). E1.L basically ties that 3.5 inch in board geometry, but no one seems to want those either. E1.S seems to just be what everyone will be getting.
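    To put very rough numbers on the yield point above: a minimal sketch assuming a simple Poisson yield model, with a made-up wafer cost and defect density rather than any foundry’s real figures.

```python
import math

WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2   # 300 mm wafer, ~70,700 mm^2
DEFECTS_PER_MM2 = 0.001                     # assumed defect density
WAFER_COST = 10_000                         # assumed cost per wafer, dollars

def cost_per_good_die(die_area_mm2):
    """Cost of one working die under a simple Poisson yield model."""
    dies_per_wafer = WAFER_AREA_MM2 / die_area_mm2          # ignores edge loss
    yield_fraction = math.exp(-DEFECTS_PER_MM2 * die_area_mm2)
    return WAFER_COST / (dies_per_wafer * yield_fraction)

for area in (50, 100, 400, 800):
    print(f"{area:4d} mm^2 die -> ${cost_per_good_die(area):7.2f} per good die")
```

    The per-good-die cost grows much faster than linearly with die area, which is why nobody builds one giant NAND die just to fill a bigger enclosure.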




    There’s a cost associated with making that determination and managing the storage tiering. When NVMe is only 3x more expensive per amount of data compared to HDD at scale, and at the cheapest end “enough” storage for an OS volume costs about the same whether it’s a good enough HDD or a good enough SSD, then it just makes sense for the OS volume to be an SSD.

    In terms of “but 3x is a pretty big gap”, that’s true and does drive storage subsystems, but as the saying has long gone, disks are cheap, storage is expensive. So managing an HDD/SSD split is generally more expensive than the disk cost difference anyway (rough break-even math sketched after this comment).

    BTW, NVMe vs. non-NVMe isn’t the thing, it’s NAND vs. platter. You could have NVMe-interfaced platters and they would be about the same as SAS-interfaced or even SATA-interfaced platters. NVMe carried a price premium for a while mainly because of marketing rather than technical costs. Nowadays NVMe isn’t too expensive. One could argue that the number of PCIe lanes from the system seems expensive, but PCIe switches aren’t really more expensive than SAS controllers, and CPUs have so many innate PCIe lanes now.
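    A back-of-envelope sketch of that break-even, with made-up illustrative prices and overhead rather than real quotes:

```python
FLASH_PER_TB = 60.0   # assumed $/TB for NAND at scale
HDD_PER_TB = 20.0     # assumed $/TB, i.e. the ~3x gap from above

def storage_cost(tb_total, hot_fraction, tiering_overhead_per_tb):
    """Compare a tiered HDD+SSD setup against just buying flash for everything."""
    tiered = (tb_total * hot_fraction * FLASH_PER_TB
              + tb_total * (1 - hot_fraction) * HDD_PER_TB
              + tb_total * tiering_overhead_per_tb)   # admin/software/complexity
    all_flash = tb_total * FLASH_PER_TB
    return tiered, all_flash

tiered, all_flash = storage_cost(tb_total=100, hot_fraction=0.3,
                                 tiering_overhead_per_tb=35.0)
print(f"tiered: ${tiered:,.0f}   all-flash: ${all_flash:,.0f}")
```

    Once the per-TB cost of managing two tiers approaches the raw price gap on the cold data, the “cheap” disks stop saving money, which is the disks-are-cheap-storage-is-expensive point.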




  • The lowest density chips are still going to be way smaller than even an E1.S board. The only thing that might be cheaper is that you’d maybe need fewer SSD controllers, but a 3.5" would have to be, at best, a stack of SSD boards, probably 3, plugged into some interposer board. Allowing for the interposer, you could maybe come up with 120 square centimeter boards, and E1.L drives are about 120 square centimeters anyway (rough area math sketched after this comment). So if you are obsessed with the most NAND chips per unit volume, then the E1.L form factor is already in theory as capable as a hypothetical 3.5" SSD. If you don’t like the overly long E1.L, then in theory E3.L would be more reasonably short with 85% of the board surface area. Of course, all that said, I’ve almost never seen anyone go for anything except E1.S, which is more like M.2 sized.

    So 3.5" would be more expensive, slower (unless you did a new design), and thermally challenged.


  • Hate to break it to you, but the 3.5" form factor would absolutely not be cheaper than an equivalent bunch of E1.S or M.2 drives. The price is not inflated by the form factor; it’s driven primarily by the cost of the NAND chips, and you’d just need more of them to take advantage of the bigger area. To take advantage of the thickness of the form factor, it would need to be a multi-board solution. Also, there’d be a thermal problem, since 3.5" applications are not designed for the thermal load of that much SSD.

    Add to that that 3.5" is currently maybe a 24Gb SAS connector at best, which means that such a hypothetical product would be severely crippled by the interconnect. Throughput-wise, we’re talking over 30-fold slower in theory than an equivalent volume of E1.S drives (rough bandwidth math after this comment). Which is bad enough, but SAS has a single relatively shallow queue while an NVMe target has thousands of deep queues befitting NAND random access behavior. So the platform would have to be redesigned to properly handle that sort of device, and if you do that, you might as well do EDSFF. No one would buy something more expensive than the equivalent capacity in E1.S drives that performs only as well as the SAS connector allows.

    EDSFF defined 4 general form factors: E1.S, which is roughly M.2 sized; E1.L, which is over a foot long and would be the absolute most data per unit volume; and E3.S and E3.L, which want to be more 2.5"-like. As far as I’ve seen, the market only really wants E1.S despite the bigger form factors, so I think the market has shown that 3.5" wouldn’t have takers.
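    Rough math behind the “over 30-fold” figure; the link efficiency, per-drive speed (assumed PCIe Gen4 x4), and the count of E1.S drives per 3.5" bay volume are all ballpark assumptions:

```python
sas_24g_gbytes = 24 / 8 * 0.8        # ~2.4 GB/s usable on one 24Gb SAS link
e1s_gen4_x4_gbytes = 8.0             # ~8 GB/s per E1.S at PCIe Gen4 x4
e1s_per_35in_volume = 10             # assumed E1.S drives per 3.5" bay's volume

aggregate = e1s_per_35in_volume * e1s_gen4_x4_gbytes
print(f'one 3.5" SAS SSD:    ~{sas_24g_gbytes:.1f} GB/s')
print(f'same volume of E1.S: ~{aggregate:.0f} GB/s '
      f'(~{aggregate / sas_24g_gbytes:.0f}x)')
```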


  • Not enough of a market

    The industry answer is: if you want that much volume of storage, get like 6 EDSFF or M.2 drives.

    3.5 inch is a useful format for platters, but not particularly needed to hold NAND chips. Meanwhile, instead of having to gate all those chips behind a singular connector, you can have 6 connectors to drive performance. Again, that matters less for a platter-based strategy, which is unlikely to saturate even a single 12Gb link in most realistic access patterns (rough math after this comment), but SSDs can keep up with 128Gb with utterly random IO.

    Tiny drives mean more flexibility. That storage product can go into NAS, servers, desktops, the thinnest laptops, and embedded applications, maybe with tweaked packaging and cooling solutions. A product designed for hosting that many SSD boards behind a single connector is not going to be trivial to modify for any other use case, will bottleneck performance by having a single interface, and is pretty much guaranteed to cost more to manufacture than selling the components as 6 drives.
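    To put rough numbers on the “platters can’t saturate the link on random IO” point, using typical ballpark figures for a 7200 RPM drive rather than any specific model:

```python
avg_seek_ms = 8.0
avg_rotational_ms = 60_000 / 7200 / 2     # half a revolution ~= 4.2 ms
io_size_kb = 4

iops = 1000 / (avg_seek_ms + avg_rotational_ms)        # ~80 random IOPS
random_mb_per_s = iops * io_size_kb / 1024             # well under 1 MB/s
link_12gb_mb_per_s = 12 / 8 * 0.8 * 1000               # ~1200 MB/s usable

print(f"HDD, 4K random: ~{iops:.0f} IOPS, ~{random_mb_per_s:.1f} MB/s")
print(f"12Gb SAS link:  ~{link_12gb_mb_per_s:.0f} MB/s")
```

    An SSD doing hundreds of thousands of random IOPS is a completely different story, which is why spreading the NAND across 6 connectors actually pays off.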


  • It depends on where you are.

    Where I live, that number might be more like 1%. At my parents’ place, it’s more like 95%, based on the number of Trump signs that have continuously stayed in yards since 2016.

    There was a party a little ways into rural territory, and a lot of us went. The hostess was terrified when we started talking bad about Trump, because the window was open and the neighbors were hard Trump people with guns.



  • I’ve got mixed feelings on the CHIPS act.

    It was basically born out of a panic over a short-term shortage. Many industry observers accurately stated that the shortages would subside long before any of the CHIPS spending could even possibly make a difference, and that the tech companies would then point to this as a reason not to spend the money they were given.

    That largely came to pass, with the potential exception of GPUs in the wake of the LLM craze.

    Of course, if you wanted to give the economy any hope for viable domestic electronics while also massively screwing over imports, this would have been your shot. So it seems strategically at odds with the whole “make domestic manufacturing happen” rhetoric.





  • While they have to be careful, there can be reasonable telemetry that helps inform what they do or stop doing.

    Example: “x% of telemetry-enabled users enable the bookmark bar” isn’t particularly useful for harmful purposes, but if it were 0.00%, then they’d know efforts accommodating the bookmark bar would be pointless. Not many users would go out of their way to say “I don’t use some feature I’m ignoring”, and telemetry is able to convey that data, so the developer is not guessing based on their own preference.

    That being said, the telemetry is so opaque that it’s hard to make an informed decision as to whether the telemetry in question is risky or not. It might be good to have some sort of accumulated telemetry data that you can click to review and submit, and have that data be actually human readable and to the point.
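    As a rough sketch of what that could look like (the class and field names are made up for illustration, not any browser’s actual telemetry schema):

```python
import json

class FeatureUsageTelemetry:
    """Plain counters the user can read verbatim before deciding to submit."""

    def __init__(self):
        self.counters = {}

    def record(self, feature_name):
        self.counters[feature_name] = self.counters.get(feature_name, 0) + 1

    def review(self):
        """Return exactly the payload that would be sent, human readable as-is."""
        return json.dumps({"feature_usage": self.counters}, indent=2)

telemetry = FeatureUsageTelemetry()
telemetry.record("bookmark_bar_shown")
print(telemetry.review())   # shown to the user before any submit happens
```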


  • Note your comment states 34 percent disagree, but doesn’t state how many didn’t care one way or the other, or rate the relative likelihood of people spending money or refusing to based on the stance.

    The snowflake conservatives may be more likely to get offended and boycott. It may be that 50 percent didn’t care either way, so the trade-off becomes 34 versus 16 percent. I suppose from another comment it seems like their poll returned approximately 1/3rd disliked, 1/3rd wanted, and 1/3rd didn’t care. So they can probably have their cake and eat it too by waffling a bit on the issue, getting both sides to buy up their preferred content to “prove” to Disney the right way to go: even if people boycott the current “wrong” mindset, they may still want to spend on their favored content to steer things. But those thirds still don’t reflect the nuance of how much people like or dislike it, i.e. whether liking LGBT inclusion is more “good on them, even though I’m not that personally invested” or more “I will not give them a dime if they don’t have inclusion”.

    Of course, they may also be concerned about their place in a potential authoritarian state, and aligning themselves is a way to avoid being a target. If all of the doom and gloom turns out to be wrong and progressives come back in a few years, then they just have to have their logo be rainbow colored for a bit and have a few progressive characters, and all will be forgiven.



  • In addition to what others say, for me the biggest sin is just how maddeningly slow it is. Trying to scroll a conversation back in time is just miserable. Coming from approaches where scrolling arbitrarily back in history has felt pretty much instant for over 20 years, it just feels horribly backwards. The reason for that sluggishness is that it’s just a terrible design vaguely wearing a passable layer of paint to make it look approachable.