With the software largely complete and in the process of being tested, it's time to turn attention to optimising the hardware on which it runs. By necessity, Project V is complex in its requirements, with a range of servers undertaking specific tasks. The aim was to replicate these across Europe and the US, but we've encountered serious problems with the Savvis network in the US and have been forced to reconfigure and move servers to the UK.
The problem could have one of three causes: 1) the connection has been deliberately throttled; 2) a router somewhere isn't properly configured or working optimally; 3) there's poor peering between the networks we're using.
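A quick first test for distinguishing these causes is to compare connection-setup latency between locations: throttling tends to cap throughput rather than handshake time, while a misconfigured router or poor peering usually shows up as inflated round-trip latency. Below is a minimal sketch of that check; the host names in the comment are placeholders, not real Project V machines.

```python
import socket
import time

def tcp_connect_time(host, port, timeout=5.0):
    """Return the TCP handshake time to host:port in seconds.

    If handshake latency between two data centres is far higher than
    geography would suggest, suspect a routing or peering problem
    rather than simple throttling (which mostly limits throughput,
    not connection setup).
    """
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start

# Hypothetical usage: compare handshake times to each location.
# us_latency = tcp_connect_time("us-server.example.com", 80)
# uk_latency = tcp_connect_time("uk-server.example.com", 80)
```

Running this from several vantage points, alongside a traceroute to see where hops slow down, narrows the fault to a particular network segment before anyone opens a support ticket.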
It may seem sensible to have all of your servers in the same data centre on the same connection, but this doesn't always make sense. Setting up a network is often more an art than a science; I've been lucky enough to work with some great network managers over the years and have learnt plenty about the dos and don'ts.
The problem with networks and servers is that you can easily spend a fortune on hardware and software - Microsoft's SQL Server is prohibitively expensive, which is why many systems use MySQL instead. And once you've spent that money, you'll need professionals to look after it. So Project V is totally outsourced. The problem is that some of the processes we're implementing aren't supportable by the hosting partners. It's a classic problem in internet management: a function falling between the gaps in the project.