By Anthere
Board meetings
See also this recent list of open questions to the board.
Juriwiki-l
Defamation, Libel...
Trademarks...
Copyright Violations...
WNL...
- note: an article announcing the Wikinews license change is here: [1]
Policy...
By Maveric
...
And on the hardware side
By Domas Mituzas
Already in March it was clear that we needed more hardware to relieve our main performance bottlenecks, but there was a lot of hesitation about what to buy. That hesitation largely ended in mid-April, when we ordered 20 new application server (Apache) boxes, which were deployed in May. Even then, our main performance bottleneck turned out to be the database environment, which was resolved by ordering and deploying two shiny new dual-Opteron boxes with 16GB of RAM each, accompanied by an external Just a Bunch of Disks (JBOD) enclosure. This configuration eliminated the previous bottlenecks, where disk performance and in-memory caches had been the critical points. These two boxes have already shown themselves capable of handling 5000 queries per second each without breaking a sweat, and were of great aid during the content rebuilds of the MediaWiki 1.5 upgrade (we could run the live site without any significant performance issues).
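A figure like 5000 queries per second can be checked against MySQL's own cumulative Questions counter. The sketch below is a minimal, hypothetical probe (not part of the actual Wikimedia tooling); the hostname and credentials are placeholders, and it assumes the MySQL Connector/J driver is on the classpath.

```java
// Hypothetical QPS probe: sample the server's cumulative "Questions"
// counter twice and divide by the sampling window. Hostname and
// credentials below are placeholders, not real Wikimedia values.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QpsProbe {
    private static long questions(Statement st) throws Exception {
        // On older MySQL servers (pre-5.0), plain SHOW STATUS is used instead.
        try (ResultSet rs = st.executeQuery("SHOW GLOBAL STATUS LIKE 'Questions'")) {
            rs.next();
            return rs.getLong("Value"); // result columns: Variable_name, Value
        }
    }

    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(
                 "jdbc:mysql://db.example.org/", "monitor", "secret");
             Statement st = c.createStatement()) {
            long before = questions(st);
            Thread.sleep(10_000); // 10-second sampling window
            long after = questions(st);
            System.out.printf("~%d queries/second%n", (after - before) / 10);
        }
    }
}
```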
A lot of load was taken off the databases by running more efficient code, disabling some really slow functions and, notably, deploying the new Lucene search. Lucene can run on cheap Apache boxes instead of our jumbo (well, not really that jumbo by enterprise standards) database servers, so we have been able to scale up quite a lot since December on the same old, poor boxes. Archives (article history) were also moved onto cheap Apache boxes, freeing expensive space on the database servers. Image server overloads were temporarily resolved by distributing content across several servers, but a more modern content storage system is certainly needed, and is planned.
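To illustrate why Lucene takes search load off the databases, here is a minimal, self-contained sketch of the pattern (not the actual Wikipedia search code): articles are read from the database only once, at index time, and every subsequent query is answered entirely from the Lucene index on a cheap box. Field names and document contents here are invented for the example.

```java
// Minimal Lucene sketch: build an in-memory index, then answer a query
// from it without touching the database again. Not the real search code.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class SearchSketch {
    public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory(); // in-memory index for the demo
        StandardAnalyzer analyzer = new StandardAnalyzer();

        // Index time: in production this feed would come from the wiki
        // database, so the DB is only touched while (re)building the index.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("title", "Opteron", Field.Store.YES));
            doc.add(new TextField("text", "The Opteron is a server processor.", Field.Store.NO));
            writer.addDocument(doc);
        }

        // Query time: served entirely from the Lucene index.
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            ScoreDoc[] hits = searcher.search(
                new QueryParser("text", analyzer).parse("server"), 10).scoreDocs;
            for (ScoreDoc hit : hits) {
                System.out.println(searcher.doc(hit.doc).get("title"));
            }
        }
    }
}
```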
There were several downtimes related to power and network issues at the colocation facility, the longest of which occurred during our move (on wheels!) to a new facility, where we have more light, power, space and fresh air. Anyway, the acute withdrawal symptoms were cured by working wikis.
There was some impressive development outside Florida as well. A new datacenter in Amsterdam, generously supplied by Kennisnet, gave us the capability to cache content for the whole of Europe and neighboring regions. Moreover, it enabled us to build a distributed DNS infrastructure, and preparations are being made to serve static content from there in case of emergencies. Various other distribution schemes are being researched as well.
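As a small illustration of what serving one name from several datacenters looks like from the client side, the hedged sketch below simply prints every address record a hostname resolves to. The hostname is a placeholder, and this shows plain multi-record resolution rather than Wikimedia's actual DNS configuration.

```java
// Hedged illustration of multi-datacenter name resolution; the hostname
// is a placeholder, not a real Wikimedia service name.
import java.net.InetAddress;

public class DnsLookup {
    public static void main(String[] args) throws Exception {
        // A name served from several datacenters can resolve to several
        // A records; geo-aware DNS would tailor this list per client.
        for (InetAddress addr : InetAddress.getAllByName("www.example.org")) {
            System.out.println(addr.getHostAddress());
        }
    }
}
```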
Preparations are currently being made to deploy our content in a Korean datacenter provided by Yahoo!. There we will certainly use our established caching technology, but we might take it one step further and place the master content servers for regional languages there. Further expansion of our existing Florida content-processing facility is also being considered.
As always, you can keep up with the latest information through the following links
But... downtimes... still happen...
Wikimedia site down on May 13
On Friday, May 13th, power supply troubles knocked out the majority of the Wikimedia servers: the main network switch, the core file server and nearly all of the web and cache servers failed. As all of the database servers had dual power supplies, they were the only ones to survive the outage. Full site recovery took more than an hour, with some lingering side effects (some hardware needs replacement).
Statistics on the projects
...historical metrics