
The Big Computer Tale

Published August 28, 2006
OK, I promised that this was coming, so here it is: the full tale of adventure, from the beginning to the end. I'm too lazy to dig up links to individual events and the corresponding entries here; if you're genuinely bored enough to care, help yourself to the archives.


It all started at some point in the past. I don't remember what point that was (didn't I just get done telling you that I'm too lazy to look it up?). Anyways, at that point, I was running an old Athlon XP 2400+ on a crippled 512 MB of RAM. I had somewhere in the neighborhood of 70GB of hard drive storage, split across two aging and creaky IDE hard drives. The only really interesting bit in the system was the gleaming GeForce4 Ti4800... which was an absolutely awesome piece of hardware four years ago, when I originally built the machine.

Somewhere along the line, it struck me that my normal development habits just didn't jibe with this particular piece of hardware. At that point, "normal development habits" were 1 or 2 instances of VS2003, up to half a dozen browser windows, a couple of instances of Notepad, and whatever other incidentals - IM clients, etc. etc.

The easy fix at the time was to boost up to 1GB of RAM; this had a marked and much appreciated benefit, since it eliminated an awful lot of slow, noisy hard-disk swapping. However, it didn't really solve the core problem.


Eventually, I moved up to VS2005. The average number of browser windows climbed a bit, and it became common for me to run 3 instances of VS at a time. I was trying to run (and debug!) a next-gen 3D game on hardware that barely scraped minimum spec for the fully-optimized retail version; clearly, the stresses of running a development environment and a very demanding, unoptimized game were just too much.

The problem is, I'm very tight-fisted. I hate spending money on things I don't absolutely need. Financial frivolity is not one of my traits. So, for quite a while, I just sucked it up and slogged along on my slow-ass machine, trying to convince myself that it wasn't really a big deal.


Then, in the vicinity of June of this year, I found the truth. (Incidentally, truth comes in a shape that bears an astounding resemblance to an Asus W2JB laptop.) The truth includes a dual-core CPU and a real, honest-to-God decent video card. I needed the truth to do some development while travelling - something that my old laptop (a Celeron 1GHz with 128MB of RAM) quite obviously just wasn't going to be able to handle.

It took roughly three seconds for me to become permanently addicted to dual-core power for development. (I'm still not really sold on it for gaming, but that's another rant altogether.) Our game is split into two projects, one that uses the C++ compiler in Visual Studio, and one that uses a proprietary compiler toolchain. Dual-core means I can do a full rebuild of the game and compile both halves simultaneously. (Or, more commonly, it means I can compile both in series on one CPU core while I faff about on the spare core...)
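
For the curious, here's roughly what "both halves at once" looks like in practice. This is just a minimal Win32 sketch, not our actual build setup - both command lines are hypothetical stand-ins for the VS build and the proprietary toolchain:

```cpp
// Minimal sketch: fire off both halves of the build at once so each
// compiler gets its own core. Both command lines are hypothetical.
#include <windows.h>

// Spawn a command line and hand back the process handle (NULL on failure).
static HANDLE Spawn(char* cmdLine)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };
    if (!CreateProcessA(NULL, cmdLine, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return NULL;
    CloseHandle(pi.hThread);    // only the process handle is needed for waiting
    return pi.hProcess;
}

int main()
{
    char vsBuild[]   = "devenv.com Game.sln /build Release";   // hypothetical
    char toolBuild[] = "scriptcc.exe /rebuild scripts.prj";    // hypothetical

    HANDLE builds[2] = { Spawn(vsBuild), Spawn(toolBuild) };
    if (!builds[0] || !builds[1])
        return 1;   // one of the builds failed to launch

    // Block until both compilers finish; Windows spreads them across cores.
    WaitForMultipleObjects(2, builds, TRUE, INFINITE);
    CloseHandle(builds[0]);
    CloseHandle(builds[1]);
    return 0;
}
```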


I returned from my travels, smitten with the beauty of dual-core, but still not entirely enlightened. You see, my first and true love was with another dual commodity - dual monitors. I felt dirty and evil, betraying my fidelity to two monitors for the illicit lust-fest of two CPU cores.

Thankfully, I eventually figured out that I was being a moronic Puritanical idiot with my technology, and embraced a more progressive view on relationships. For a time, I happily used two monitors and two CPU cores, by connecting one of my LCDs to my laptop.



That was a pretty happy arrangement, but only as a temporary fix. In fact, I'd only decided to try it in the first place because I'd already committed to building a new development PC. The sticky part was, I was going to wait for the new Conroe processors to arrive, as the word on the street was that AMD would slash processor prices once Intel's new powerhouse chips were available.

So I contented myself to wait for the fateful day of the Conroe release, hobbling along on the laptop. Unfortunately, it just wasn't comfortable; the laptop's screen runs at 1440x900 while the second display runs at 1280x1024, which leads to weird aspect-ratio glitches. It also meant having a laptop chew up valuable desk space that I normally use for hand-written notes and sketches. All in all, it satisfied my baser technological thirst, but it just wasn't convenient.


Fortunately, new hardware was on the way... I was just about to order parts for a nice AM2 system and hunker down to get my discounted Athlon X2 CPU once the Conroe date passed. On a whim, I decided to check some Conroe benchmarks, just out of curiosity.

If you don't already know just how soundly the X2 line gets its ass kicked by Conroe... well, then, you probably haven't been able to make heads or tails of all the hardware gibberish I've been saying all this time, so who cares.


That was the clincher; I decided to go LGA775 instead of AM2. It would be only the second Intel machine I'd bought in over 5 years (the laptop being the first) - and the first Intel machine I ever built myself. After some fond farewells to AMD (ok, it was a rude hand gesture) I sank my teeth soundly into the bullet, and bought the parts.

Now all I had to do was wait for the CPU itself to become available.


Easier said than done, as it turns out. For whatever reason I missed the actual release of the CPU by several days - I was asleep, or drunk, or abducted by aliens, or whatever. Can't recall. Anyways, by the time I got around to finding a place to actually buy a CPU, they were already sold out. I missed another couple of release cycles, including a pretty major reshipping on August 16.

I was close to despairing of ever finding a real, live E6600 CPU for sale anywhere. I tried one store that claimed to have them in stock, but they ended up being a bunch of crap heads. (See my complaints from several days ago for the lowdown on just how evil they were.) At last, I gave in and just bought an E6400 from NewEgg.

... Lo and behold, a short time later, I realized that NewEgg also had E6600s in stock. So, naturally, I bought one. For good measure, I got a cheap Athlon64 3700 to replace the 3200 in my gaming machine. The stingy-bastard half of me still isn't speaking to the techno-lust half of me for dropping all that money, but damned if the techno-lust half isn't one happy mo-- well, Samuel Jackson knows what I mean.


Anyways, the E6600 finally arrived. I don't recall when, since I live in a sort of blurry state and pay very little attention to things like days, time, time of day, or daytime. It was time to build the box.



I'd already decided that the new machine would be named Umaro (fitting the trend of FF6-themed characters). I got a nice, sleek, silver XBlade case, some OCZ RAM with silver heatsinks, hooked up my silver Saitek keyboard, and so on. It looks pretty sharp.

Assembly was pretty much boring; the only sticky moment was trying to figure out how to get the LGA775 heatsink back off of the motherboard to check on the thermal paste coverage. That was a few minutes of barely-tempered panic, until I read the manual and figured out how to release the anchor pins.


From there, it's pretty much been a breeze. I'm still installing SDKs and such, but I'm confident that compiles and edit/compile/run-game turnarounds will be even more astoundingly fast than on the laptop.

Naturally, some annoying crap with Windows activation happened. My activation actually held up fine until I installed new motherboard drivers, at which point the dreaded "You changed your hardware, you evil bastard" dialog appeared. Moral of the story: install all drivers, service packs, and important devices before activating Windows. So I'll have to call the nice hotline sometime and get that sorted out.


The only weak point in the machine right now is the GPU, which is a Radeon X800XL. That GPU was previously in my gaming machine, Celes; I bought a GeForce 7900GT to put in the new system, but just couldn't bring myself to leave all that awesome power untapped. Since I do all my PC gaming on Celes (because it's hooked up to my LCD projector), the only thing I'd ever really run on the 7900GT would be development stuff... and I already know the X800XL is more than capable of handling that with ease.

So now my development machine is vastly more potent, my gaming machine got a solid CPU and GPU upgrade, and I've got lots of spare parts available. My entire old dev box (Phoenix) is now awaiting decommissioning (once I get all my data off it); I've got two spare CPUs sitting around (the E6400 and the old A64 3200 from Celes); and in the process I discovered a gutted old 486 that used to serve as my router.


All in all, it's a pretty good deal. I have a spare and still decent box already built; I can build a couple more systems relatively cheaply; and if I can find a working AT power supply I'll be well on my way to getting the "old-school" gaming machine I've been wanting to build for a few years now.



And to get all that digital joy, all I have to do is go without eating for the next 438 years. Seems fair.

Comments

Ravuya
ScienceMark that son of a bitch; I want to see some raw numbers.
August 28, 2006 09:17 AM
Muhammad Haggag
Ah, now that's a satisfying post. I have one noobish question, though, regarding this:
Quote: Dual-core means I can do a full rebuild of the game and compile both halves simultaneously. (Or, more commonly, it means I can compile both in series on one CPU core while I faff about on the spare core...)

The dream of doing such a thing has been tickling me for a long time, but I'd like to know how this stuff is managed. Do you get to assign processes to cores manually, or does Windows schedule it automatically?
If it's done automatically, will Windows let you play about with a core while utilizing the other for compilation?

Basically, my concern is that the Windows scheduler--on a single-core CPU--sucks BADLY. If you start one CPU-intensive computation/process, everything grinds to a halt. If you write a simple application with a tight busy loop (i.e. without any Sleep calls or anything), it'll take you 5 minutes to open up Task Manager to terminate it.
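
For concreteness, this is the kind of offender I mean - a minimal sketch that burns the CPU without ever yielding:

```cpp
// Pathological case: a tight busy loop that never yields the CPU.
// On a single core this starves everything else at the same priority.
#include <windows.h>

int main()
{
    volatile unsigned long long counter = 0;
    for (;;)
    {
        ++counter;  // pure CPU burn: no Sleep, no I/O, no yielding
        // Even a Sleep(0) here (yield to any ready thread) or Sleep(1)
        // would give the rest of the system room to breathe.
    }
}
```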
August 28, 2006 10:17 AM
ApochPiQ
Disclaimer: I don't actually know many details about the scheduler's SMP behavior, so this is extrapolation based on high-level observations.

It looks like expensive threads usually stay on a single core, but they can occasionally be swapped to the other core in certain cases. I'm not positive that the virtual CPUs have a constant relationship with the physical cores, but I do know that workload shuffles between CPUs (according to Task Manager, etc.).


Each process can request which CPU it runs on via the affinity mask; you can also set this through Task Manager. There are some third-party tools available that will launch an app with a given affinity as well. Generally you have pretty good control, but it's not made trivially available at a high level. That said, I've also never needed it - even with two processes pegging out both cores, the system has remained very responsive, due (I think) to the way the load sharing across cores works out. I've observed that on both the T2500 in my laptop and the Conroe, so either it's an Intel CPU thing or a Windows scheduler thing [smile]
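
If you want to do it from code, it's just a couple of Win32 calls. A minimal sketch (error handling mostly omitted; bit N of the mask corresponds to logical CPU N):

```cpp
// Minimal sketch: reading and setting the current process's affinity mask.
#include <windows.h>
#include <stdio.h>

int main()
{
    DWORD_PTR processMask = 0, systemMask = 0;
    GetProcessAffinityMask(GetCurrentProcess(), &processMask, &systemMask);
    printf("process mask: 0x%Ix, system mask: 0x%Ix\n", processMask, systemMask);

    // Pin this process to CPU 0 only (bit 0 set).
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1))
        printf("SetProcessAffinityMask failed: error %lu\n", GetLastError());

    return 0;
}
```

SetThreadAffinityMask does the same thing per-thread, if you ever need finer control than per-process.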


As for practical examples, I regularly do heavy web surfing, DVD playback, and even run games while the compiler chugs in the background. Considering that on my AthlonXP 2400 I would have to basically take a nap every time I started a build, it's a heavenly improvement.


One tip if you're feeling pain on a single CPU - make good use of process priorities. You'll sacrifice a tiny bit of speed under medium load (i.e. if anything is going on besides the expensive task) but it's well worth it to be able to actually use the system should something crap out. I don't recall specifics off the top of my head, but I do know there's a way to launch a process and force it into a given priority level.
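
For instance, launching a process directly into below-normal priority looks something like this (the build command is a hypothetical stand-in):

```cpp
// Sketch: launch a heavy process at below-normal priority so the rest of
// the system stays responsive. The command line is a hypothetical stand-in.
#include <windows.h>

int main()
{
    char cmdLine[] = "devenv.com Game.sln /build Release";  // hypothetical
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };

    // BELOW_NORMAL_PRIORITY_CLASS keeps the compile from starving the UI.
    if (CreateProcessA(NULL, cmdLine, NULL, NULL, FALSE,
                       BELOW_NORMAL_PRIORITY_CLASS, NULL, NULL, &si, &pi))
    {
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}
```

From a command prompt, start /belownormal yourbuild.exe accomplishes the same thing, and SetPriorityClass lets a running process demote itself.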

And, naturally, any NT-based kernel schedules about a trillion times better than the 9x kernel did. But if you're still running a 9x kernel you've got bigger problems than chugging processes [wink]



Rav: real life numbers show a complete clean and rebuild of a 450 KLOC project in 1 min. 24 seconds, including full optimization and link-time code generation (i.e. normally a very slow build). A full rebuild on the laptop was taking somewhere around 2-3 minutes (don't have accurate timings) and on the old Athlon could run into 6-7 minutes easily. Typical turnaround time (edit/compile/run) is down to about 35 seconds, slightly faster than on the laptop.

This isn't massively dramatic, but that's mainly because the bottleneck is hard drive access. I just couldn't justify going RAID quite yet, although I'm starting to think of it as a good upgrade vector in another year or so. A good RAID setup can probably knock 25-30% off the times listed.


The real killer is in-game performance; X2 and X3 have historically been highly CPU bound. My debug build averages around 12 FPS at 1024x768 (no AA, no AF) - whereas a debug build on my old hardware would clock < 1 FPS, guaranteed. An optimized debug build averages 38-40 FPS, up from 5-10. The retail game runs noticeably smoother, but I don't have numbers offhand. The benchmark averages just over 40 FPS, up from the 37 average I had when running on the Athlon64 3200 (same video hardware, slower CPU/RAM/HDD).


So all in all it's a significant boost. Most importantly, though, it psychologically feels a heck of a lot faster, and that makes all the difference.
August 28, 2006 04:20 PM