• 0 Posts
  • 16 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • I select hostnames drawn from the ordinal numerals of whatever language I happen to be trying to learn. Recently it was Japanese, so the first host was named “ichiro”, the second “jiro”, the third “saburo”.

    Those are the romanized spellings of the original kanji characters: 一郎, 二郎, and 三郎. These aren’t the ordinal numbers per se (eg first, second, third) but are an old way of assigning given names to male children. They literally mean “first son”, “second son”, “third son”.

    Previously, I did French ordinal numbers, and the benefit of naming this way is that I can enumerate a countably infinite number of hosts lol



  • Ah, now I understand your setup. To answer the title question, I’ll have to be a bit verbose with how I think Incus behaves, so that the Docker behavior can be put into context. Bear with me.

    br0 has the same MAC as the eth0 interface

    This behavior stood out to me, since it’s not a fundamental part of Linux bridging: creating a bridge is functionally equivalent to creating a software switch, where every port of the switch has its own MAC, and all “clients” of that switch also have their own MACs. So copying eth0’s MAC onto br0 appears to be a systemd-specific thing. If I had to guess, systemd does this so that traffic from the physical interface (eth0) that passes directly through to br0 will keep the MAC from the physical network, thus making it easier to understand traffic flows in Wireshark, for example. I personally can’t agree with this design choice, since it obfuscates what Linux is really doing vis-à-vis a software switch. But reusing this MAC here is merely a weird side-effect and doesn’t influence what Incus is doing.

    Instead, the reason Incus needs the bridge interface is precisely because a physical interface like eth0 will not automatically forward frames to subordinate interfaces. Whereas for a virtual switch, that’s the default. To that end, the bridge interface is combined with virtual ethernet (veth) interfaces – another networking primitive in Linux – to each container that Incus manages. The behavior of a veth is akin to a point-to-point network cable, plus the NICs on both ends. That means a veth always consists of a pair of interfaces, where traffic into one end comes out the other, and each interface has its own MAC address. Functionally, this is the networking equivalent of a bidirectional pipe.

    By combining a bridge (ie a virtual switch) with veth (ie virtual cables), we have a full Layer 2 network topology that behaves identically to a physical bridge with physical cables. Thus, your DHCP server is none the wiser when it sends and receives BOOTP traffic for assigning an IP address. This is the most flexible way of constructing a virtual network within Linux, since it has feature-parity with physical networks: there is no Macvlan or Ipvlan or tunneling or whatever needed to make this work. Linux is just operating as a switch, with all the attendant flexibility. This architecture is what Calico – a network framework for Kubernetes – uses in order to achieve scalable, Layer 3 connectivity to containers; by default, Kubernetes does not depend on Layer 2 to function.
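
    To make the bridge+veth picture concrete, here is a toy Python model of that Layer 2 topology. Every class name here is invented purely for illustration (in real life this is `ip link add ... type bridge` and `ip link add ... type veth`): a MAC-learning switch, virtual cables, and a DHCP-style exchange riding over them.

```python
class VethPair:
    """Virtual cable: a frame written into one end comes out the other."""
    def __init__(self):
        self.a = VethEnd(self)
        self.b = VethEnd(self)

    def deliver(self, src_end, frame):
        peer = self.b if src_end is self.a else self.a
        if peer.attached is not None:
            peer.attached.receive(peer, frame)

class VethEnd:
    def __init__(self, pair):
        self.pair = pair
        self.attached = None            # a Bridge or a Host

    def send(self, frame):
        self.pair.deliver(self, frame)

class Bridge:
    """MAC-learning software switch, like a Linux bridge."""
    def __init__(self):
        self.ports = []
        self.fdb = {}                   # forwarding database: MAC -> port

    def attach(self, end):
        end.attached = self
        self.ports.append(end)

    def receive(self, in_port, frame):
        src, dst, _payload = frame
        self.fdb[src] = in_port         # learn where the sender lives
        if dst in self.fdb:
            self.fdb[dst].send(frame)   # known unicast: forward to one port
        else:
            for port in self.ports:     # unknown or broadcast: flood
                if port is not in_port:
                    port.send(frame)

class Host:
    """A container (or the DHCP server) terminating one veth end."""
    def __init__(self, mac):
        self.mac = mac
        self.inbox = []
        self.end = None

    def plug(self, end):
        self.end = end
        end.attached = self

    def receive(self, _end, frame):
        if frame[1] in (self.mac, "ff:ff:ff:ff:ff:ff"):
            self.inbox.append(frame)

    def send(self, dst, payload):
        self.end.send((self.mac, dst, payload))

# Wire it up: one bridge, one veth pair per "container", like Incus does.
br = Bridge()
dhcp_server = Host("02:00:00:00:00:01")
container = Host("02:00:00:00:00:02")
for host in (dhcp_server, container):
    cable = VethPair()
    br.attach(cable.a)
    host.plug(cable.b)

container.send("ff:ff:ff:ff:ff:ff", "DHCPDISCOVER")   # broadcast floods out
dhcp_server.send(container.mac, "DHCPOFFER")          # reply is unicast back
```

    The flood-then-learn behavior is exactly why the DHCP broadcast reaches the server and the unicast reply finds its way back, with no Macvlan or tunneling involved.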

    OK, so we now understand why Incus does things the way it does. For Docker, when using the Macvlan driver, the benefits of the bridge+veth model are not achieved, because Macvlan – although being a feature of Linux networking – is something which is implemented against an individual interface on the host. Compare this to a bridge, which is a standalone concept and thus can exist with or without any interfaces to the host: when Linux is actually used as a switch – like on many home routers – the host itself can choose to have zero interfaces attached to the switch, meaning that traffic flows through the box, rather than to the box as a destination.

    So when creating subordinate interfaces using Macvlan, we get most of the same bridging behavior as bridge+veth, but the Macvlan implementation in the kernel means that outbound traffic from a subordinate interface always gets put onto the outbound queue of the parent interface. This makes it impossible for a subordinate interface to exchange traffic with the host itself, by design. Had the kernel developers gone the extra mile, they would have just reinvented a version of bridge+veth that is excessively niche.

    We also need to discuss the behavior of Docker networks. Similar to Kubernetes, containers managed by Docker mandate having IP connectivity (Layer 3). But whereas Kubernetes will not start a container unless an IPAM (IP Address Management) plugin explicitly provides an IP address, Docker’s legacy behavior is to always generate a random IP address from a default range, unless given an IP explicitly. So even though bridge+veth or Macvlan will provide Layer 2 connectivity to a DHCP server for obtaining an IP address, Docker is eager to provide an IP, just so the container has one from the very start. The distinction between Docker and Kubernetes+Calico is thus one of actual utility: by getting an address from Calico’s IPAM, Kubernetes knows that the address will actually work for networking, because Calico also creates/manages a network. Whereas Docker has no problem assigning an IP but not actually checking if this IP can be used on that network; it’s almost a pro-forma exercise.
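
    That eager behavior can be sketched as a deliberately simplified model (this is not Docker’s actual IPAM code, just the shape of it; 172.17.0.0/16 is the default bridge subnet, with .1 reserved for the gateway):

```python
import ipaddress

class EagerIPAM:
    """Hand out the next free address from a default pool, without ever
    checking that the address works on the real network."""
    def __init__(self, subnet="172.17.0.0/16"):
        self.net = ipaddress.ip_network(subnet)
        self.hosts = self.net.hosts()       # generator: .0.1, .0.2, ...
        next(self.hosts)                    # skip .0.1, the gateway
    def allocate(self, requested=None):
        if requested is not None:           # explicit IP wins, also unchecked
            return ipaddress.ip_address(requested)
        return next(self.hosts)             # otherwise: next free, no DHCP, no ARP probe

ipam = EagerIPAM()
print(ipam.allocate())                      # prints 172.17.0.2
print(ipam.allocate())                      # prints 172.17.0.3
```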

    I will say this about early Docker: although they led the charge for making containers useful, how they implemented networking was very strange and led to a whole class of engineers who now have a deep misunderstanding of how real networks operate, and that only causes confusion when scaling up to orchestrated container frameworks like Kubernetes that depend on rigorous understanding of networking and Linux implementations. But all the same, Docker was more interested in getting things working without external dependencies like DHCP servers, so there’s some sense in mandating an IP locally, perhaps because they didn’t yet envision that containers would talk to the physical network.

    The plugin that you mentioned operates by requesting a DHCP-assigned address for each container, but within the Docker runtime. And once it obtains that address, it then statically assigns it to the container. So from the container’s perspective, it’s just getting an IP assigned to it, not aware that DHCP has happened at all. The plugin is thus responsible for renewing that IP periodically. It’s a kludge to satisfy Docker’s networking requirements while still using DHCP-assigned addresses. But Docker just doesn’t play well with Layer 2 physical networks, because otherwise the responsibility for running the DHCP client would fall to the containers; some containers might not even have a DHCP client to run.
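
    The control flow of such a plugin might be sketched like this (all function and class names here are hypothetical stand-ins, not the real plugin’s API; the lease values are fake):

```python
import time

def dhcp_discover(mac):
    # Stand-in for a real DHCPDISCOVER/OFFER/REQUEST/ACK exchange,
    # performed by the runtime on the container's behalf.
    return {"ip": "192.168.1.50", "lease_seconds": 3600}

class ContainerStub:
    def __init__(self):
        self.ip = None
    def assign_static(self, ip):
        # The container only ever sees a static assignment.
        self.ip = ip

def manage_lease(container, mac, once=False):
    while True:
        lease = dhcp_discover(mac)              # DHCP happens in the runtime
        container.assign_static(lease["ip"])    # container is none the wiser
        if once:
            return lease
        time.sleep(lease["lease_seconds"] / 2)  # renew at roughly T1

container = ContainerStub()
lease = manage_lease(container, "02:00:00:00:00:02", once=True)
print(container.ip)                             # prints 192.168.1.50
```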

    If I’m missing something about MACVLAN that makes DHCP work for Docker, let me know!

    Sadly, there just isn’t a really good way to do this within Docker, and it’s not the kernel’s fault. Other container runtimes like containerd – which relies wholly on the standard CNI plugins and thus doesn’t have Docker’s networking footguns – have no problem with containers running their own DHCP client on a bridged network. But for any container manager to handle DHCP assignment without the container’s cooperation always leads to the same kludge as what Docker did. And that’s probably why no major container manager does that natively; it’s hard to solve.

    I do wish there could be something like Incus’ hassle-free solution for Docker or Podman.

    Since your containers were able to get their own DHCP addresses from a bridged network in Incus, can you still run the DHCP client on those containers to override Docker’s randomly-assigned local IP address? You’d have to use the bridge network driver in Docker, since you also want host-container traffic to work and we know Macvlan won’t do that. But even this is a delicate solution, since if DHCP fails to assign an address, then your container still has the Docker-assigned address but it won’t be usable on the bridged network.

    The best solution I’ve seen for containers on DHCP-assigned networks is to not use DHCP assignment at all. Instead, part of the IP subnet is carved out, a region dedicated only to containers. So in a home IPv4 network like 192.168.69.0/24, the DHCP server would be restricted to only assigning 192.168.69.2 through 192.168.69.127, and then Docker would be allowed to allocate the addresses from 192.168.69.128 to 192.168.69.254 however it wants, with a subnet mask of 255.255.255.0. This mask allows containers to speak directly to addresses in the entire 192.168.69.0/24 range, which includes the rest of the network. The other physical hosts do the same, allowing them to connect to containers.
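
    The carve-out is easy to sanity-check with Python’s `ipaddress` module:

```python
import ipaddress

# One /24: DHCP keeps the low half, Docker statically allocates the high half.
# Everyone keeps the /24 mask, so containers and physical hosts reach each
# other directly without any routing tricks.
lan         = ipaddress.ip_network("192.168.69.0/24")
dhcp_pool   = [ipaddress.ip_address("192.168.69.2") + i for i in range(126)]   # .2 - .127
docker_pool = [ipaddress.ip_address("192.168.69.128") + i for i in range(127)] # .128 - .254

# Both pools sit inside the same /24, and they never overlap.
assert all(ip in lan for ip in dhcp_pool + docker_pool)
assert not set(dhcp_pool) & set(docker_pool)
```

    On the Docker side, this split is what the `--subnet` plus `--ip-range` options to `docker network create` express: containers allocate from the narrower range while still carrying the full /24 mask.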

    This neatly avoids interacting with the DHCP server, but at the loss of central management, and it splits the allocatable addresses into smaller pools, potentially causing exhaustion on one side while the other still has spare addresses. Yet another reason to adopt IPv6 as the standard for containers, but I digress. For Kubernetes and similar orchestration frameworks, DHCP isn’t even considered, since the orchestrator must have full internal authority to assign addresses with its chosen IPAM plugin.

    TL;DR: if your containers are like mini VMs, DHCP assignment is doable. But if they’re pre-packaged appliances, then only sadness results when trying to use DHCP.


  • I want to make sure I’ve understood your initial configuration correctly, as well as what you’ve tried.

    In the original setup, you have eth0 as the interface to the rest of your network, and eth0 obtains a DHCP-assigned address from the DHCP server. Against eth0, you created a bridge interface br0, and your host also obtains a DHCP-assigned address on br0. Then in Incus, you created a Macvlan network against br0, such that each container on this network will be assigned a random MAC, and all the container Ethernet frames will be bridged to br0, which in turn bridges to eth0. In this way, all containers can each receive a DHCP-assigned address. Also, each container can send traffic to the br0 IP address, to access services running on the host. Do I have that right?

    For your Docker attempt, it looks like you created a Docker network using the Macvlan driver, but it wasn’t clear to me if the parent interface here was eth0 or br0, if you still have br0. When you say “I have MACVLAN working”, can you describe which aspect is working? Unique MAC assignment? Bridged traffic to/from the containers or the network?

    I’m not very familiar with Incus, and I’m entirely in the dark about this shoddy plugin you mentioned that’s needed for DHCP and Macvlan to work. So far as I’m aware, modern Docker Engine creates networks using its built-in libnetwork drivers, so the “-d macvlan” parameter specifies which driver will load. Since this would all be at Layer 2, I don’t see why a plugin is needed to support DHCP – v4 or v6? – traffic.

    And the host cannot contact the container due to the MACVLAN method

    Correct, but this is remedied by what’s to follow…

    Can I make another bridge device off of br0 and bind to that one host-like?

    Yes, this post seems to do exactly that: https://kcore.org/2020/08/18/macvlan-host-access/

    I can always put a Docker/podman inside of an Incus container, but I’d like to avoid onioning if possible.

    I think you’re right to avoid multiple container management tools, if only because it’s generally unnecessary. Although it kinda looks like Incus is more akin to Proxmox, in that it supports managing VMs and containers, whereas Podman and Docker only manage containers, which is further still distinct from the container runtime (eg CRI-O, containerd, Docker Engine (which uses containerd under the hood)).


  • Movies would have people believe that the jets are there to shoot down the errant jet. During the Cold War, this was entirely plausible and did happen. But more commonly, when a fighter jet is sent to intercept an unknown aircraft – perhaps one that has entered restricted or prohibited airspace – it may be just to have eyes on the situation.

    Airspace is huge. The vastness of the air is like the vastness of the sea. Sometimes that’s an advantage, because there are fewer things to hit. But on the flip side, if an aircraft needs assistance, there might not be anyone for many miles in any direction. As for what an assisting fighter jet can do, the first is to establish navigational accuracy. History has shown that airplanes can get lost, and sometimes unfortunately end up hitting mountains or running into known obstacles or weather. A second aircraft can confirm the first aircraft’s position, since two separate aircraft having navigational problems is exceptionally rare.

    The next thing is having eyes on the outside of the aircraft. Things like a damaged engine on a jetliner aren’t visible to the pilots, but there’s a chance the passengers or cabin crew can look. But damage to a rudder is impossible to see from inside the aircraft; I’m not yet aware of a commercial aircraft equipped with a tail-viewing camera. Checking the condition of the landing gear is also valuable information, if a jetliner has taken damage but is still aloft.

    Finally, if it should come to it, an assisting aircraft can be the pilot’s eyes, if for some reason the pilots can no longer see out their windscreen. At this point, the flight may already be close to the end but it may help avoid additional casualties on the ground. I’m reminded of the flight where volcanic ash sandblasted the windshield, or when a cargo jet had a fire onboard which filled the cockpit with thick smoke.

    To be clear, neither incident was aided by fighter jets, but having an external set of eyes to give directions would have made things a little bit easier for the pilots. Other aircraft besides fighter jets can provide assistance, such as any helicopters or private pilots in the area. But of course, fighter jets are on-standby and can get to a scene very fast.



  • IANAL. In the USA, the majority of US States adopt some definition of murder based on the age-old definition from English common law. But each state modifies the definition to include or exclude things, to the point that discussing even just a single state’s definition would be a mini law course. However, some generalities can be drawn using just the age-old definition.

    Murder is generally defined as having four elements, or components which the trier-of-fact (eg a jury) must find in order for culpability to attach. Attempted murder is the absence of the fourth element. This is not rigorous, since again, we’d have to identify the exact jurisdiction and the question didn’t indicate one. Anyone who has:

    1. Performed or omitted some act…
    2. Which is the proximate cause of death…
    3. With malice aforethought…
    4. And the victim dies…

    Is guilty of the crime of murder. As a minor discussion of these points, the first element means that positively doing something (eg cutting a safety strap) and not doing something (eg not turning off the electricity to exposed wires) can be parts of a murder charge. For the second element, the term “proximate cause” is a legal term deeply entwined with “foreseeability” and whether a chain of causation or liability connects the act with the death. A Rube Goldberg-esque manner of death might fail the proximate cause element, unless the setup was purposely concocted precisely to kill. Likewise, proximate cause isn’t always the last element in a chain of events, since that would mean a victim would be their own killer for walking into a sniper’s bullet.

    The third element, malice aforethought, refers to the mental state of the accused. That is, did they genuinely intend great harm and/or death upon the victim. Different jurisdictions vary on whether an intent-to-merely-assault that leads to death can support a murder charge, and oftentimes that’s what second-degree murder is used for. Mental state is not a binary quantity either, as different “levels” of mental state correspond to different charges, all else the same. Malice aforethought is the worst sort, corresponding to a killer who plans a victim’s death, or acts with utter disregard for any victim’s life. Lesser levels might be charged as “reckless homicide”, “negligent homicide”, etc.

    Finally, the fourth element for murder is that the victim must actually die. If the victim is immediately dead and this is verifiable using the body, this is easy to prove in court. But if the victim lingers, the legal jurisdiction might adopt a “year and a day” rule, since if the victim doesn’t die quickly, then it’s assault/battery rather than murder. Or if the victim is believed to be dead but it can’t be proven – eg victim’s body never recovered – then the defense might try to argue that the victim suffered only a flesh wound, and is simply missing but alive.

    </ background>

    OK, so to the question. You’ve described a scenario where someone has: 1) affirmatively pressed the kill button, 2) which is believed to result in person X’s death, 3) with full intention to kill person X, but 4) person X does not die. At even a passing glance, this is not murder since person X is alive. But does it meet the first three elements to support attempted murder? Probably not, at least without additional details.

    Elements #1 and #3 are present, but it’s element #2 that will be problematic. It isn’t sufficient to just tell someone that “yes, this button will absolutely kill person X”. At the very minimum, the accused needs to at least be aware of the mechanism by which person X will be killed, and how that relates to the “kill button”. An implied method-of-death would suffice, such as when ordering a skilled archer to assassinate a rival. Even though the accused just says “go kill him”, the accused is aware that the archer is capable of killing using their bow-and-arrow. Whereas ordering a toddler to kill the rival would be presumed as nonsensical.

    If, however, the button was already demo’d to the accused as killing some other (pretend) victim first – meaning the accused has seen the manner that the “button press” leads to death – that might establish proximate cause, even if it’s not obvious what the cause of death was. If the pretend victim clutches their chest and falls down, it’s plausible to the accused that the button’s mechanism somehow involves a pacemaker malfunction. If instead the accused is told specifically that the bombs on the victim’s car will go off, then that’s a more solid establishment of element #2, although even bombs do not reliably detonate.

    But there’s even more: just because a set of circumstances arguably meets the three elements for attempted murder, it’s ultimately the trier-of-fact that will have to believe it. That is to say, it would be tough to convince a jury that the accused had “absolute” certainty that the button would kill, which also affects element #1. Whatever convinced the accused that the button is genuine may not be convincing to a panel of jurors. Unless the accused voluntarily admits to that fact in court after the fact, it is tough to prove. What is illegal according to the elements of a crime is not the same as what will easily convince a jury.

    If it seems like this element #2 – or really all the elements – of murder are fact-intensive, that’s because they are. Murder is not as clear-cut as a parking ticket. Killing is as old as humans are, and how it’s been performed and how it’s been regulated/abolished has evolved over history. Modern legal scholars have to figure out how things like stochastic terrorism/killings or life-affecting afflictions (eg HIV/AIDS) should be fitted into the system of written law, because modern law requires writing down the crimes beforehand.



  • My primary complaint with the F-type connector is that it only does half the job: a proper connector should make a reliable and consistent mechanical and electrical coupling. For the latter, the F-type fails miserably, on account of having no protruding pin of its own: reusing the center conductor as a “pin” is at best slapdash, and at worst fails to account for inconsistent conductor cross-sections.

    When affixing an F-type connector onto a new segment of coax, unless great care has been taken to slice the cable cleanly, the center conductor often ends up with an arrow-shaped tip, which also flattens the round cross-section into an oval. This tip is now a minor danger to people, in addition to no longer being reliably round. This certainly doesn’t help with reliable mating later.

    Furthermore, a solid copper tip is not ideal for a connector, unless the opposite coupler that grasps the tip is made of copper as well. But copper can’t be used to make springy receivers, so inevitably another metal must be used: the prevailing contacts for connectors are either solid brass or plated (eg gold). A sharp copper tip will end up scratching those mating surfaces over time.

    And this is just the start of the F-type’s follies. The user experience of turning a 7/16" fine thread in narrow spaces is exhausting. With no consistent specs for the F-type, some cheaper connectors have the thinnest possible hex head to fit a wrench on. Compression F-type is better, but then we have to compare to other connectors.

    In the broadcast and laboratory spaces, BNC is the go-to connector, with easy mating and quarter-turn engagement. It also comes in 50 and 75 Ohm variants (albeit confusingly). In telecoms, the SMA connector is used for its small size, and larger coax might use the beefy N connector. Some of these variants are even waterproof. Solderless is an option. All these connectors are rated by their manufacturers for a minimum number of mating events.

    In all circumstances, according to this chart, the RF performance of BNC, SMA, and N are superior to F-type, which has only ever been used for TV, CCTV, and certain low-frequency clocking systems. I’m not sure what you mean by “rated to absurd frequencies”, but surely SMA’s (up to) 25 GHz rating would be tremendously and wildly insane in comparison to 1-2 GHz for F-type.

    So that’s my beef. It’s just a bad connector, used only because it’s cheap.


  • Or… we could just make appliances that are tolerant of the world’s different AC voltages. The world’s commercial electric grids only use a handful of voltages, and they’re all between 100-240v. Compressing the list by removing voltages that are within 10 volts of one another, the list is quite short: 100v, 120v, 230v, 240v.

    That’s all there is. And it’s exactly why most USB phone chargers list their input voltage as: 100-240. Today’s modern switch-mode power supplies can properly tolerate any of the world’s voltages, as long as you adapt the connector. The voltage side of things is mostly solved, except maybe for cheaper, motor-driven devices. But even that is changing to use inverter technology that can take almost any voltage.
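
    A quick arithmetic check of that claim, assuming the usual ±10% tolerance on nominal mains voltage (the tolerance figure is my assumption, not a formal spec):

```python
# A "100-240 V" switch-mode supply must ride out the lowest sag on 100 V
# mains and the highest swell on 240 V mains.
NOMINALS = [100, 120, 230, 240]        # the short list of grid voltages
SMPS_MIN = 100 * 0.9                   # 90 V: sagging 100 V mains
SMPS_MAX = 240 * 1.1                   # 264 V: swelling 240 V mains

# Every nominal voltage, plus or minus 10%, fits inside that window.
for v in NOMINALS:
    assert SMPS_MIN <= v * 0.9 and v * 1.1 <= SMPS_MAX
```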


  • I’m not sure how hard you’re rotating a 3.5 mm cable, but yes, that sound is the sudden making and breaking of the contacts, which it’s not meant to do. It will wear down the surfaces, even if the 3.5 mm tip is gold plated, since the gold is for anti-corrosion not for anti-friction.

    But, the notion of cylinder housings for connectors has not died. After all, large cylinders are easy to grasp. Here is one very beefy example, often called the California Standard connector due to its use for Hollywood movie productions. This is a waterproof, twist-lock connector that also suppresses arcs if you unplug it while it’s still on. It can only connect in one orientation, so you keep rotating around the center pin until it slots in. It’s heavy enough to probably also double as a blackjack for self-defense lol

    Hubbell California Standard connectors CS6365 and CS6364


  • In a sense, we already have one. And it’s used on the vast, vast majority of desktop computers, it’s the standard for removable cords on electric kettles around the world, and it shows up in all data centers. I’m talking about IEC 60320, sometimes just called the “IEC connectors” or for one very specific connector, the “PC plug”.

    Some IEC 60320 couplers

    For the task of attaching AC power to an appliance, this is probably the one with the greatest adoption worldwide. And there absolutely could be a wall-mounted version of these, the same way that datacenters essentially have power strips – ok, they’re RPCs lol – with these connectors.

    Their only noticeable drawback is that the voltage can be anything up to 250v. So plugging 120v appliances into an Italian 230v outlet would be bad. But this family of connectors – formally called “couplers” – was meant to match current-capacity, where a mismatch would cause a fire due to overload. It’s still the user’s responsibility to check the voltage, in the same way that buyers have to check the type of battery they need for a remote control (eg AA vs AAA).


  • A cylindrical connector would be fine for connecting one or two conductors. But more than that and it starts to become a nightmare to design, and even worse to build and use reliably. Classic examples include the venerable RCA connector, the BNC connector for radio signals, and IMO the worst connector to ever exist, the F-type connector used for TV coaxial cable.

    With just two conductors, a cylinder can have a concentric shape, where the inside is a pin and the outside is a shell. But you’ll notice that although all these connectors are circular, they’re hardly designed to rotate while attached. You generally have to remove or at least loosen them before trying to turn them. Or you still try it and the TV picture might flicker a bit. The problem is one of electrical contact.

    The engineers that make connectors go through painstaking efforts to get the conductive surfaces to align – or “mate” as they say – because if they don’t, the signal quality drops like a rock. It’s already hard enough to get cheap connectors to reliably align, but now you want them to move relative to each other? That’s tough to build, and moving surfaces will eventually wear down.

    Even worse is that circular shapes tend to have poorer mating, because manufacturing tolerances for curves are wider than tolerances for flat surfaces. We actually don’t want to make round contacts, if a rectangular shape would suffice. Flat contacts are simpler to produce and generally more reliable [citation needed].

    But even more intractable is the matter of matching the pinouts. Here is the pinout when looking at the connector of a USB C cord:

    USB C pinout when looking straight at a USB C cord

    Even without understanding what each pin does, it’s noticeable that certain pins are the same whether you flip the connector over. In fact, they even label them that way: pin A12 on the top-right is also B12 on the bottom-left. The most damaging scenario is if USB 5v power was sent down the wrong pin, but it’s very clear that the VBUS pins – which are the 5v power – will always be in the same place no matter the cord orientation.

    The only pins which are different upon inversion are the data lines – anything with a + or - in the name – or certain control signals which are intentionally paired with their opposite signal (eg CC1 and CC2). The USB C designers could have packed way more data pins if they didn’t have to duplicate half the pins to allow flipping the connector over. But that design choice has made USB C easier to use. A fair tradeoff.
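
    That symmetry is easy to verify from the pin map itself (names per the standard USB Type-C pin layout, slightly simplified, eg “D+” for Dp1/Dp2; a cable plug omits a few of these pins, but the argument is unchanged):

```python
# Top row A1..A12 and bottom row B1..B12, written left to right.
A = ["GND", "SSTXp1", "SSTXn1", "VBUS", "CC1", "D+", "D-", "SBU1", "VBUS", "SSRXn2", "SSRXp2", "GND"]
B = ["GND", "SSTXp2", "SSTXn2", "VBUS", "CC2", "D+", "D-", "SBU2", "VBUS", "SSRXn1", "SSRXp1", "GND"]

# Flipping the plug 180 degrees lines position k of one row up with
# position 13-k of the other row (index 11-k here, zero-based).
# Power and ground are placed so the flip leaves them unchanged:
for k in range(12):
    if A[k] in ("GND", "VBUS"):
        assert B[11 - k] == A[k]    # same name at the mirrored position

# The control pins do change under the flip, eg CC1 lands on the SBU2
# position; that asymmetry of the CC pins is what lets the devices
# detect which way up the plug is:
assert A[4] == "CC1" and B[11 - 4] == "SBU2"
```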

    And that’s the crux of it: in engineering, we are always dealing with tradeoffs, whether for performance, cost to produce, ease of use, future compatibility, or a host of other concerns. Wanting a cylindrical connector could certainly be a design goal. But once it starts causing problems with alignment or manufacturing, there will inevitably be pushback. And it’s clear that of all the popular connectors used today, few are cylindrical.

    Heck, even for DC power, the barrel connector has given way to more popular designs, like the Anderson PowerPole or the XT family of connectors, because the market needed high-current connectors for drones and Li-po batteries. Granted, the XT connectors are basically two cylindrical connectors side-by-side haha.


  • Starting with the title question, US States are bound by the federal constitution, which explicitly denies certain powers to the States, found mostly in Article I Section 10. The first clause even starts with foreign policy:

    No State shall enter into any Treaty, Alliance, or Confederation; grant Letters of Marque and Reprisal; coin Money; emit Bills of Credit; make any Thing but gold and silver Coin a Tender in Payment of Debts; pass any Bill of Attainder, ex post facto Law, or Law impairing the Obligation of Contracts, or grant any Title of Nobility.

    In this context, the terms “treaty, alliance, or confederation” are understood to mean some organization which would compete with the union that is the United States of America. That is to say, a US State cannot join the United Kingdom as a fifth country, for example. Whereas agreements between states – the normal meaning of “treaty” – are controlled by the third clause, which refers to such agreements as “compacts”.

    No State shall, without the Consent of Congress, lay any Duty of Tonnage, keep Troops, or Ships of War in time of Peace, enter into any Agreement or Compact with another State, or with a foreign Power, or engage in War, unless actually invaded, or in such imminent Danger as will not admit of delay.

    Compacts are only allowed if the US Congress also approves. This is what allows the western US States, the federal government, and Mexico to all agree on how to (badly) divide the water of the Colorado River.

    So if foreign policy is meant to include diplomatic relationships, military exercises, setting tariffs, and things like that, then no, the US States are severely constrained in doing foreign policy. The diplomatic relations part is doable, where state-elected officials can go to foreign countries to advocate for trade and tourism. But those officials must not violate the federal Logan Act, which prohibits mediating an active dispute involving the USA, since that’s the US Secretary of State’s job. For example, it would be unlawful if a US State governor tried to mediate a prisoner exchange with a country that the USA has engaged the military against.

    For your other question about what US States are, the answer changed significantly in the 1860s. During that decade, the federal constitution gained three amendments, with the 14th Amendment being the most significant for the notion of statehood. That Amendment’s Equal Protection and Due Process Clauses gave life to the notion of “incorporation”, which is that the US Constitution’s limits on the federal government also apply to the several States.

    Before the 1860s, US States were indeed closer to countries in a trading, monetary, and foreign policy alliance. Some US States even had official religions, since the First Amendment’s prohibition on endorsing religion only applied to the federal government. But post 1860s, it was firmly established that the federal government isn’t just some economic committee, but an actual representative body, one whose laws will trounce state laws.

    The best example I can point to is how broadly the federal government exercises its “interstate commerce” powers. Basically, if something has anything remotely to do with crossing a state border, the feds can write laws on that topic. That was extremely rare pre 1860s, and now it’s basically the norm. The postal service is one such activity which is explicitly and wholly a federal matter, written into the initial Constitution. But now, airspace, telecoms, and railroads are all matters over which the federal government asserts its authority via the “interstate commerce” powers, and if US States were countries, they might object to the feds. But they’re not countries, so they don’t wield that power.



  • Firstly, and it’s honestly a minor issue: I think your question would draw more answers if it had a title that at least mentions the crux of the question, that is, “what is a western style room/home?”.

    Anyway, answering the question: the label “western-style” for a room, home, hotel, bathroom, suit, or even envelope is generally used only in contrast to local designs, in places where Western-world designs have become the “global norm”. So far as I can tell, this isn’t (usually) rooted in any sort of bias against the non-Western world, but rather is a helpful if coarse indicator of what things will look like.

    To that end, classification as western style is mostly going to appear in places where that is not the norm or is not endemic to the given place. Japan is a good example as the island nation continues to have its own designs that remain popular, while having imported a great number of western ideas since the Meiji Restoration in the mid 1800s.

    Whereas the distinction as western design isn’t very useful when all relevant design options already stem from western approaches. Take for example the slender and tall townhomes common in the Netherlands. If such a townhome were constructed in San Francisco, calling it a western design is terribly unhelpful, as a standard townhouse in San Francisco would already be of American (and thus western) design. Rather, that home would be described as “Dutch style”, to contrast against the standards found in SW America, which hew closely to standard American construction but with notable Spanish influence, such as tile roofs and verandas.

    The distinction also doesn’t help when comparing forms that most wouldn’t even find comparable. So an alpine cabin (a cold-weather, western design) is not comparable to an Alaskan Indigenous igloo, despite both being a home or dwelling. There must be at least some similarity before drawing the distinction of western or eastern or whatever design.