Today, I decided to take a little break from coding and explore some of the other projects in the Thousand Parsec universe.
Quite a few of the GSoC projects (the two AI projects and the server configuration project) focus on allowing TP to be played by a single player. This is very exciting, and I'm already envisioning a coffee-break ruleset similar to Strange Adventures in Infinite Space. :)
After reading through the list, I wanted to try out some of the rulesets, particularly Risk, as it already has a playable version available. Unfortunately, due to my noobishness with gcc compilation, I was unable to get the git version of tpserver-cpp (where all the rulesets are stored) working. Instead, I copied the risk folder over to my working tpserver 0.5 directory and compiled it there. This seemed to work at first:
The screenshot shows the client connected to a server running the risk ruleset. Unfortunately, I could not get the game working properly, as it did not assign me any planets or ships to start with. The same problem occurred in the tpwx client, so I suppose the ruleset depends on the latest tpserver in some way.
Slight disappointment aside, I spent the latter part of the day looking at how I could improve the client's frame rate. The base fps on the client is very low; even when the screen is not showing anything (all objects are culled), the framerate is capped at around 40 fps. I found the bottleneck to be the starry background code: once I commented it out, the framerate became about 99 fps (still without looking at anything).
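For reference, I'm comparing these numbers using Ogre's built-in frame statistics. Here's a minimal python-ogre sketch of the kind of fps logger I mean; names like `renderWindow` stand in for whatever the client already holds, and I'm assuming the bindings expose `RenderTarget.getAverageFPS` the same way C++ Ogre does:

```python
import ogre.renderer.OGRE as ogre

class FpsLogger(ogre.FrameListener):
    """Prints the average FPS once a second, so the effect of a
    change (say, disabling the starfield) is easy to compare."""
    def __init__(self, renderWindow):
        ogre.FrameListener.__init__(self)
        self.renderWindow = renderWindow
        self.elapsed = 0.0

    def frameEnded(self, evt):
        self.elapsed += evt.timeSinceLastFrame
        if self.elapsed >= 1.0:
            print("avg fps: %.1f" % self.renderWindow.getAverageFPS())
            self.elapsed = 0.0
        return True  # returning False would stop the render loop

# Hooked up once at startup, e.g.:
# root.addFrameListener(FpsLogger(window))
```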
Level of detail, in which objects reduce their polygon count as the camera moves further away, was another technique I wanted to apply. Thankfully, this turned out to be easy, as I simply had to run OgreMeshUpgrade (a program bundled with Ogre) on the models I wanted. After trying a few options, the framerate went up by about 15-16 fps. After a while, though, I noticed some corruption in the model skins due to the polygon reduction.
What I did was tweak the distance levels so the mesh is only reduced when it is really far from the camera. I guess someone who can see at the pixel level might still notice it, though.
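If you'd rather not bake the distances into the mesh itself, Ogre can also bias LOD selection at runtime. A small python-ogre sketch of the same idea; the entity and mesh names are made up for illustration, and I'm assuming `Entity.setMeshLodBias` and `Camera.setLodBias` are wrapped as in C++ Ogre:

```python
import ogre.renderer.OGRE as ogre

def create_ship_entity(sceneManager):
    """Create an entity and push its LOD switch distances out.
    The entity and mesh names here are purely illustrative."""
    entity = sceneManager.createEntity("scout", "scout.mesh")
    # A factor > 1.0 scales up the distance at which each lower-detail
    # level kicks in, so reduction only happens when the model is
    # really far away; 0 and 2 clamp the usable detail indexes
    # (index 0 is the original, full-detail mesh).
    entity.setMeshLodBias(2.0, 0, 2)
    return entity

# The same knob also exists camera-wide:
# camera.setLodBias(2.0)
```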
After that, I rebooted into Windows to try creating a binary using py2exe. Since the script was already written, packaging it was not difficult, but what I did notice was a tremendous boost in fps when I ran the client.
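For anyone who hasn't used py2exe before, the whole thing boils down to a small distutils setup script along these lines. This is just a sketch: the entry-point filename and the include list are placeholders, not the client's actual configuration:

```python
# setup.py -- build the exe with: python setup.py py2exe
from distutils.core import setup
import py2exe  # importing this registers the 'py2exe' command

setup(
    # 'windows' builds a GUI executable (no console window);
    # the script name is a placeholder for the client's entry point.
    windows=["tpclient-pyogre.py"],
    options={
        "py2exe": {
            # Modules that py2exe's import scanner misses have to be
            # listed by hand; this one is an example, not a known list.
            "includes": ["ogre.renderer.OGRE"],
        }
    },
)
```

One thing py2exe won't do for you, as far as I know, is collect the Ogre plugin DLLs and .cfg files that are loaded at runtime; those have to be copied next to the exe by hand.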
Here's a "action-packed" shot of the client connected to the demo tp server. With 6x anti-aliasing and 1280x1024 resolution, it still has better fps than the linux version. Wow, windows is great! Actually, the cause of the difference is probably my linux drivers. Since ATI has abysmal driver support, I'm running the open-source drivers which are slower in 3d performance but at least they work. (ATI drivers give me graphical corruption even when browsing the net or openoffice)
Here are my relevant system specs, consisting pretty much of last-gen hardware:
Processor: Can't remember the model, but Windows reports it as a 1.3 GHz Athlon.
RAM: 1 gig. I thought it was overkill when I first bought it.
Video card: Radeon 9800 Pro. I'll get Nvidia next time.
Motherboard: Nvidia nForce2. Irony?