I have a corner desk in our living room with a few old computers, and one of them is a Fujitsu-Siemens Esprimo P. This Pentium 4 equipped, cream-coloured mini-tower is an ordinary computer from the late 2000s. I don't know the exact model and year of release at the moment; on the outside it looks like the Esprimo P5905, which used the same D2151-A1 motherboard. The P5905 at least originally shipped with Windows XP, though, whereas this unit came with Vista, which had not even been released in 2005 when this review (https://www.alphr.com/desktop-pcs/28016/fujitsu-siemens-esprimo-p5905-review) was written. It is probably still essentially the same computer, presumably just sold a couple of years after the first release as a low-price model. As a side note, I actually bought a Fujitsu-Siemens computer in 2005 as my first genuinely modern computer of my own, and I was quite happy with it, but that was a different model and altogether another story.
The computer barely fits into the retro category, but I'm also not using it for retrocomputing purposes; this is my new home server. Or at least it is intended to be one. So far I have installed some new hardware in it (a 960 GB SSD and two 4 TB hard drives for storage, connected through a PCIe SATA-II card), but most importantly I have installed a new operating system on it: openSUSE Leap 15.1 Linux.
A fake server hiding under family art. |
The stated minimum system requirements for openSUSE Leap 15.1 are a 64-bit CPU, 1 GB of RAM (2 GB recommended) and 5+ GB of disk space (https://opensuse-guide.org/installation.php). My old F-S computer's specs are easily above the minimum: the processor is a 64-bit model and there is a whopping 3 GB of RAM in it. It should even be possible to get 64-bit Windows 10 installed on this without a problem, but I just wouldn't want to actually use that combination. However, the installation of a modern OS did not go through with default settings. I made my boot DVD and the computer booted from it fine (the installer even had a fancy winter/Christmas theme on the first try, if I recall correctly, as the computer's real-time clock battery was missing and hence the time was well off), but when I tried to start installing, the computer worked away for quite a while until the progress counter eventually froze at 100% and nothing ever happened.
Shooting troubles
A typical modern installation guide, like the page linked above, has no troubleshooting section; the mostly self-explanatory installation process just glides through like a dance when you're the prom queen. Of course most people wouldn't go installing new operating systems on old computers anyway, so the inevitable issues are met by only us few... in any case, I had to figure out what went wrong.
The problem was with the boot menu's Kernel Default setting, which had to be changed first by pressing F5 in the DVD's GRUB boot/install menu. This computer is old enough to either not support APIC (Advanced Programmable Interrupt Controller) at all, or at least not the version expected, so I had to turn it off for the installation. Later on I had to edit my /etc/default/grub file, probably because of this; otherwise the computer would not shut down by software commands, but after that change the ATX software power control caused no issues.
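For the record, the permanent fix ends up as a kernel parameter in /etc/default/grub. Something along these lines should do it, assuming 'noapic' is indeed the parameter this hardware needs (other entries on the line will vary per installation):

# /etc/default/grub - add 'noapic' to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet noapic"

# regenerate the GRUB configuration so the change takes effect
sudo grub2-mkconfig -o /boot/grub2/grub.cfg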
Anyway, after this setting change I was able to install the new Linux distro. I had never run openSUSE before, so it would be a new learning experience after having mostly used Debian-based distros. Especially as I opted not to use a GUI at all - there wouldn't be an excess of computer resources to waste, and most of the time I wasn't intending to sit next to a monitor with this computer anyway. SSH would be the way to control this partner in computing, straight from my main workstation in the next room. I just needed to install an SSH server (sshd, i.e. the SSH daemon) and set my firewall rules properly, including swapping the default port 22 to some five-digit number for a little additional security. The first peculiarities of the new distro come up right away: to install stuff I can't use apt-get commands but zypper, and the default firewall application is not UFW but firewalld, driven by firewall-cmd.
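In practice that setup boils down to something like the following - a rough sketch only, with 22222 standing in for whatever five-digit port gets picked, and package/service names as I understand them to be on Leap 15.1:

# install and start the SSH daemon
sudo zypper install openssh
sudo systemctl enable --now sshd

# change "Port 22" to e.g. "Port 22222" in /etc/ssh/sshd_config, then:
sudo systemctl restart sshd

# open the new port in firewalld and drop the default ssh service
sudo firewall-cmd --permanent --add-port=22222/tcp
sudo firewall-cmd --permanent --remove-service=ssh
sudo firewall-cmd --reload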
That said, about not having a GUI, I do recommend installing the fish shell to make life on the CLI a bit easier - even if you use the terminal only occasionally. Fish gives a somewhat PowerShell-like way to browse available commands with TAB and offers other usability features, such as colouring terminal text a bit like programming text editors (i.e. IDE applications) do. The default Bash is powerful for sure, if you know how to use it, but it is essentially just an enhanced version of 1970s technology. A barebones Unix shell can be really daunting to use if you have no long experience, no tutor to support you, and no proper thick manual within reach.
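Getting fish in place is a quick job, assuming the package is simply called 'fish' in the repos:

sudo zypper install fish
chsh -s /usr/bin/fish    # make fish the login shell for the current user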
Yeah, you can google every command separately if you have another computer around (or use remote access), but googling is just a terrible mess in comparison to having any of those three. Googling gets stuff done, but it's really like going to the nearby woods to fetch berries or mushrooms: you might know how to find them, there might be fresh stuff around to pick, and then it's just a matter of time - but it can take a lot of time to find what you need, and usually you have to search several spots at once. Not to mention that a lot of the data on the Internet is already well rotten, its best-before date ten years past, or worm-eaten to begin with, and it can be hard to see from afar which one is a good pick - at least with books it's possible to judge from the cover (or the front pages) when it was originally released, and books are in theory more likely to contain verified facts. That of course is not always true either, but now I'm getting sidetracked.
Server is up and sitting
So now I can access my fancy home server from another computer in my local home network (actually I set it up to be accessible from the outside web as well, but that will need stricter security rules about how, who and from where). What then? What to do with a home server? Many people don't really seem to have a clue what to do with a server, and I suppose that is understandable, especially as I can't do just anything with this one. Nevertheless, let me think of a few options.
1. File server. I can set up a private file sharing server that also provides network drives for all the local devices we have at home. Alternatively it could be used for transferring data to other people. Sure, there are all kinds of free cloud services available, but if you care about privacy the free ones become a bit less tempting. On the other hand, if you need space for hundreds of gigabytes of data or even terabytes (video material in my case hogs probably around 80% of my used terabytes), the free services are just not sufficient, and paid services can actually get pricey at higher storage tiers.
2. Web page server. Well, I'd lack DNS support, so people would need some fancy http://111.111.111.111/sakariaania style address to access it, but I might be able to use it myself for things like my own data or an easy check on whether the server is still up, and I could provide direct links for people who need them. It would always amuse me to have my own WWW server at home.
3. Surveillance camera. "Why would you need that?!?" Yeah, I have nothing to steal and I'm always at home anyway, right? Any home can be broken into even if there is nothing really valuable inside, and getting photos of the intruders and the time of the event might help a lot in figuring out the possible crime. Like the risk of fire, such things are unlikely, but it's still best to be somewhat prepared in case this unfortunate thing happens. On the more practical side, though, I could check whether any mail has arrived when I'm actually not at home.
4. Mail server. Okay, for this too to work properly I'd need some extra services set up, and I might be quite content with my existing email addresses... but at least it would make it possible for the server to send email notifications if some service fails - or even to build up some kind of MFA system.
5. Remote control. Technically, if there is anything I can control with a computer and wish to access remotely, a server with just SSH access could become the tool for that.
There would of course be various other possibilities, but those strike me as realistic and usable.
In addition to all this, for me it is also largely just about testing and learning. If I ever really need a server, I will be able to set one up more easily, and I may also run into things that help me understand what I encounter at work. The hands-on method teaches me a lot more than being handed a server IP and credentials to log remotely into some machine so I can do some Active Directory management, for instance. That doesn't really tell me what the server really 'is'. By setting one up I'll get a somewhat better impression - even if no enterprise environment would rely on setting up a server on a 10+ year old home computer, especially as nowadays hardware servers are set up less and less often when you can just have virtual servers.
RAID over Sakariaania
For now I'm only going to go for the file server option, as that is what I actually need. Since I bought two identical 4 TB hard drives for this specific purpose, I also felt like testing a RAID 1 setup with them. For a moment I thought my PCIe SATA card with its RAID controller would actually work as hardware RAID, but then I found out that these are regarded as "fake RAID" devices. Not only that, but especially on Linux it is recommended to use a software RAID setup instead, both because the "fake RAID" can cause technical issues (including reduced reliability: should one of the drives or the computer itself fail, the surviving drive might not be recoverable on another platform) and because Linux's own software RAID tooling is supposedly very good and robust. It just might, maybe, take slightly more resources than the controller-assisted "fake RAID". Is my old Esprimo up for the task?
SATA-II PCI-Express Card still unpacked. |
The first shock came after installing the card and drivers. The card itself was supposed to have its own BIOS for automatically setting up a RAID, to be entered by pressing a function key during the computer's self-test. I guess the key was correct, since pressing it froze the whole system. When I simply started the computer, I could not find the drives at all. Oh dear, did the card not work on this computer at all? Silly me, though - I had forgotten that the new HDDs were completely unprepared. I went to my other computer, created partition tables and formatted the drives. After that I could find the drives on my Fujitsu-Siemens as devices, but not in the file system. For a little moment I had also forgotten that on a Linux setup like this the drives would not be automatically mounted either... such corrupting effects all those modern convenience features can have. So after partitioning, formatting and mounting, the drives could be found on the server computer - as separate entities.
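For reference, the preparation per drive goes roughly like this - the drive name and the choice of ext4 are just examples, so it's worth double-checking which device is which before wiping anything:

# create a GPT partition table and one full-size partition (example drive: /dev/sdb)
sudo parted /dev/sdb mklabel gpt
sudo parted -a optimal /dev/sdb mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdb1

# mount it somewhere so it actually shows up in the file system
sudo mkdir -p /mnt/data1
sudo mount /dev/sdb1 /mnt/data1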
The Linux tool 'mdadm' lets you set up a software RAID on whatever partitions of whatever drives are available, so that is the recommended option for a software RAID. I can already tell that it worked without problems and wasn't too hard: it just took a couple of commands to assemble the RAID and then to enable it. As another option I potentially had the 'dmraid' application, with which it should have been possible to work with the PCIe card itself, but I ended up not even really trying it. As mentioned in the previous paragraph, I was never able to reach the card's BIOS on this computer, which was a bit of a bummer. Fortunately the disappointment faded gradually, first when I found out that 'dmraid' could have set up the necessary things anyway, and even more when I found out that it's likely not a good setup in any case.
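In essence the whole thing looks roughly like this, assuming the two data partitions are /dev/sdb1 and /dev/sdc1 (the device names will of course vary):

# assemble a two-disk RAID 1 array from the prepared partitions
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# put a file system on the array and record the config so it assembles on boot
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

# the initial sync can be followed here
cat /proc/mdstat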
If I now check, for instance, the disk free status of all drives on my server terminal with the command 'df', I see the RAID disk listed as /dev/md0. A moment later I can mount it as a network drive on my main computer. Mission accomplished? Not quite. This is just a start. On the other hand, because this is also a test, I will still change my drive setup a bit.
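The network drive part on the workstation side is, roughly, an sshfs mount along these lines (host name, port and paths are made-up examples):

# mount the server's RAID mount point over SSH
sshfs -p 22222 user@homeserver:/mnt/raid ~/serverdrive

# and unmount it when done
fusermount -u ~/serverdrive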
The first copying test was not terribly impressive, though. The computer is normally rather quiet, but when I first tested copying a few GB worth of data over the SSH file transfer protocol just to check functionality, the CPU hit 100% and the computer fan jumped from about 1100 to 2600+ RPM, which made it rather loud. It was night time too... oops. When copying locally from drive to drive the CPU hogs "only" around 80% and the machine doesn't make much noise. The normal idle temperature is around 40-50 C, yet this 100% CPU transfer situation took the temperature close to 80 C - but barely further. So the temperature was not that bad, the computer made it through fine and the main issue was really just the fan noise. I might consider either swapping the fan or adding another quiet one to improve the cooling at some point.
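A handy way to keep an eye on the fan and temperatures during these tests, assuming lm_sensors recognises this motherboard's monitoring chip (and that the package is called 'sensors' in the repos):

sudo zypper install sensors
sudo sensors-detect       # probe for monitoring chips, answering the prompts
watch -n 2 sensors        # show CPU temperature and fan RPM, refreshed every 2 seconds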
Nevertheless, this test made me wary of the RAID setup. Was the software RAID too heavy after all, or was it just SSHFS? On subsequent file transfers there was no similar heavy fan load for whatever reason, and when I checked the per-process resource usage, it was actually not mdadm that took the resources but sshfs. Transfers to the non-RAID drive also took a lot of resources, so I suppose the software RAID is not really the problem; it is simply a heavy job for that computer to transfer files over the connection. Then again, it is somewhat difficult to say, since the md0 sync of the RAID array seems to spawn multiple processes that can take a few percent of CPU even when nothing is really being done, and that might stack up into unnecessary drain on an old computer like this. I'll probably remove the RAID eventually after testing - it's not really needed after all, but I had never tried it before. Instead I can just run scheduled backups every now and then.
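Such a scheduled backup can be as simple as a nightly rsync from one data drive to the other in the server's crontab - a sketch with example paths (note that --delete makes the copy an exact mirror, so it should be used with care):

# crontab -e on the server: mirror the first data drive to the second every night at 03:30
30 3 * * * rsync -a --delete /mnt/data1/ /mnt/data2/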
The Linux command line also has the convenient command 'time' for checking the duration of any command, so I could test the transfer speed. By running "time cp [/source/path] [/target/path/through/sshfs]" I got the following results in two different tests:
From local HDD to server RAID HDD.
real 5m55,582s
user 0m0,212s
sys 0m25,496s
From local HDD to server SSD.
real 5m31,433s
user 0m0,238s
sys 0m25,763s
My test package was 15.9 GB worth of video files. Transferring to the SSD was a bit faster than to the HDD RAID, as expected, although I'm also sure the SSD can't reach its full potential on that computer. I also suspect the motherboard's SATA is only SATA-I, whereas the PCIe card should at least in theory provide SATA-II connectors. Still, I guess 45-50 MB/s is a fairly decent transfer speed for this kind of setup - 15.9 GB in roughly five and a half to six minutes works out to about 45-48 MB/s. I still have some more tests to go before drawing further conclusions, and I might yet test whether I can actually get the RAID set up with the 'dmraid' command set. We shall see when the next part comes.
Can't deny it though - this kind of warrior pair is more or less asking to join a RAID. |