=== One is glad to be of service ===

I've been taking care of boxes since my first job as an EmployedWorker. In the '90s I provided free telecommunication services to a total of 450 registered users through my own BullettinBoardSystem, [[SystemShockBBS]]. I've been using a wide variety of different OSes; see SystemArchitectRole for a list.

=== Production systems ===

I've designed and helped implement the following production systems, all of which are still in service:

* **SugarLabs's Internet infrastructure** -- I'm coordinator of the [[http://sugarlabs.org/go/InfrastructureTeam | Infrastructure Team]], taking care of 8 machines hosted at various colocation facilities, running development and support services for a large community.
* **[[http://fsf.org/ | Free Software Foundation]]'s virtualization infrastructure** -- I was the fall 2009 system administrator intern. My main job was to set up and evaluate a 2-node **high-availability virtualization cluster** based on [[http://www.xen.org/ | Xen]] and [[http://www.drbd.org/ | DRBD]].
* **DevelerCompany's IT infrastructure** -- For over 6 years I've been continuously growing and restructuring the network, the servers, and their intricate mesh of services and support scripts. The main server, called ##trinity##, holds 1TB of storage and offers file storage, user authentication, e-mail, and many web-based services to a highly heterogeneous and complex environment comprising several versions of Linux, MacOsX, and Windows clients. Additional servers act as a VoIP PBX and as a secondary slave for many (but not all) of ##trinity##'s services.
* **[[http://www.fieremostre.it/ | Fieremostre (Milan)]]** -- A cluster of 4 RedHat servers: two web front-ends running Java application servers, and two database and filesystem back-ends with SCSI RAID5. Fully managed remotely, including power fencing and a robotized tape changer.
* In 2003, I configured and installed a cluster of 5 nodes on blade CPU boards with **shared fiber-channel RAID storage**. I used a pre-release of **RedHat Advanced Server** to implement a 1.5TB shared storage pool with GFS1. The system initially went into production without the GFS pool because of reliability concerns with this then-new technology.
* **Genexpress Lab** -- A combo of two interconnected servers, each acting as a gateway and file server for a security ring. I designed it for the Department of Bioengineering of the University of Firenze, at Prato's Scientific Center.

My philosophy is to use mainstream hardware whenever possible and to concentrate most services on a few physical systems. The cost savings can then be spent on increasing availability and performance through redundancy. This strategy leads to data centers that are easy to understand and maintain, while scaling up much better than traditional asymmetrical solutions (a dedicated web server, mail server, db server...). Being particularly fond of the UNIX culture, I tend to keep my systems as open as possible while keeping them very secure.

TODO: add a list of server software I use/know
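To give an idea of the Xen/DRBD approach mentioned in the list above: each virtual machine's disk lives on a DRBD resource that replicates synchronously between the two nodes, so either node can take over a guest if the other fails. A minimal sketch of such a resource definition follows; the resource name, hostnames, device paths, and addresses are hypothetical placeholders, not the actual configuration of any cluster listed here.

```ini
# /etc/drbd.d/vmstore.res -- illustrative example only
resource vmstore {
    protocol C;              # synchronous replication: a write completes
                             # only after it reaches both nodes (needed for HA)
    device    /dev/drbd0;    # replicated block device exposed to Xen
    disk      /dev/sda3;     # local backing partition on each node
    meta-disk internal;

    on node1 {
        address 192.168.0.1:7788;
    }
    on node2 {
        address 192.168.0.2:7788;
    }
}
```

The ##/dev/drbd0## device is then handed to a Xen guest as its virtual disk, and a cluster manager (with fencing) promotes the surviving node to primary and restarts the guest there on failover.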