That was so even before broadband. Microsoft employees have always made their software available before MS could get it to the end of the production line. The competition is how fast they can get it out. Was it XP they had to pull from production when they discovered that Taiwan was listed before China? Caused a lot of embarrassment.
Isn't the backbone of the web Unix-based, very very rarely rebooted? That would be a novelty for Windows users.
No matter how good the protection manufacturers use, there are better coders out there than they employ or ever will employ. I have yet to see a protection scheme hold up.
A lot of coders disagree with MS and support the open licence, which is why MS does everything it can to throw a spanner in the works. Local and government departments are being advised to use open source rather than MS products; think of the revenue MS would lose. But of course, getting the so-called IT people to use it is the biggest problem with its adoption.
I look at it no differently to OOF: a collection of people with the same interest. Coders are no different; they live and breathe coding. They reverse-engineer all Windows products and will show you where DOS still plays a part; they know the product better than MS does. It is truly amazing to see what a Russian can do with even a 286. Absolutely, truly unbelievable.
And no, I am not a coder, but I still prefer the command line.
Yes, piracy existed long before broadband, even the internet. Fast connections have given it a much larger footprint.
As to Unix and the Internet, yes, (proper) Unix is used extensively. Yes, I support many servers with uptimes of 4yrs plus (rarely given maintenance windows). Remember, though, that infrastructure servers will be running a minimal amount of software: perhaps, for example, just the kernel and BIND.
4yr uptimes are wrong. Utterly wrong. No OS is without flaws, and an OS that has been running 4yrs without patching must be vulnerable to a whole host of remotely exploitable flaws.
That said, my brother's old NT4 server was up for over 4yrs (as a closed system, patching was less important), and his Windows 2000 Server systems likewise, until it all became Internet-connected, and hence the (usually) monthly reboots for patching.
Windows uptime is let down not by the OS, but by the applications. Most of the time you can kill the misbehaving app, but sometimes you need to reboot. No different to Unix.
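On the Unix side, killing a misbehaving app without touching the OS looks something like the following minimal sketch (a background `sleep` stands in for the hypothetical misbehaving process):

```shell
# Stand-in for a misbehaving app: a long-running background process
sleep 300 &
PID=$!

# Ask it to terminate politely (SIGTERM); 'kill -9' is the last resort
kill "$PID"
wait "$PID" 2>/dev/null

# 'kill -0' sends no signal, it only tests whether the process still exists
kill -0 "$PID" 2>/dev/null || echo "process gone"
```

Windows offers the same idea through Task Manager or `taskkill`; on either platform a reboot only becomes necessary when the OS itself, not the app, is wedged.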
Note, I have specifically not mentioned Linux, as that is an entirely different beast, and shows that disorganised development is not a recipe for stable software.
Agreed, every protection system can be overcome (temporarily), but that's the point of WGA. A determined person will keep cracking it every time, and the people who won't pay for anything will keep searching the virus/malware-ridden crack sites for cracks. Most of Joe Public will not bother and will get a licence. As said, small PC shops don't dare to put on cracked copies, as the next set of updates will show them up.
The open licence (forgetting the MS Open Licensing scheme, which is something different), the Free Software Foundation, the GNU Public Licence etc. do not mean 'free' software as in zero cost; 'free' means something entirely different in this case. It is still against the law to copy, use and distribute the software unless the licence specifically grants you that permission. That is how the main Linux distributors make their money: many of the poorer distros are 'free', but the likes of Red Hat etc. aren't. A lot of confusion exists over 'free' software, as people believe they can do what they like, distribution-wise.
Granted, there is an awful lot of truly free software from the open software movement, some of it very good. The problem (from a desktop view; server admins generally want to pay for their software) is that an entirely free software platform mostly relies on Linux as the kernel, built up with flaky X11 implementations and awkward window managers. The advantage Microsoft has in being a single corporation is that they have made this reliable, tight, fast and about as user-friendly as you can get (they have the money to pour into user testing, rather than shipping the user interface a geek developer wants).
I am an (amateur) coder. A bit rusty when it comes to assembler now, but I could pick it back up if I wanted. The days of proudly saying what your code can do in 1000 bytes are gone, though: too difficult to debug. A Windows app (users do not want to use command-line apps any more) needs about 1500-2000 lines of C++ code just to display the window (Linux windowed apps are no different), so libraries are used. Libraries tend to be generic, so any dragged-in function will contain more code than that specific call needs, and so the bloat begins. But a little bloat doesn't matter, as we are no longer confined to 640k of RAM split into 64k chunks, and floppy disks. Additionally, the actual code for most apps is quite small; it is the icons, graphics and other resources that pad out the .exe file.

There is no need, on desktop/server systems, to write it in assembler, unless a particular function needs speed (and even then, modern compilers are good enough). Why spend weeks doing it the hard way when the same can be achieved, for example, with a .NET programming package (either Visual Studio or the freebies, or even the freebie Visual Studio Apps!)? Now embedded systems, that's a different kettle of fish, and often does require the use of machine code.