PcPerf.fr

Posts by PcPerf bot


  1. We've started dealing with the aftermath of the outage.  It looks like we should have the basics online (stats, fah-web, the main AS, most of the key servers), but most of the redundant systems will be down, so there could be outages even with everything we've done to keep the basics up.  Hopefully this will be straightened out by the end of the day, Pacific time.

    See the full article


  2. We've been working to minimize the impact of the server room maintenance coming up tomorrow.  Since this is our main room, FAH will be stretched pretty thin during the outage, and we expect there will be WU shortages.  Also, clients will not be able to send WUs back to servers that are unavailable during the outage.

     Some good news, however: we have been able to get power to a few key machines, so the stats and web page should be up, as well as most of the key servers.  If you have trouble getting or returning WUs tomorrow, please wait it out until we get a chance to bring everything back online.
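For those who script around the client, "waiting it out" is just retry-with-backoff. A hedged sketch (this is not how the official FAH client is implemented; `send_fn` below is a stand-in for whatever actually uploads a finished WU):

```python
import time

def send_with_backoff(send_fn, max_attempts=6, base_delay=60, max_delay=3600):
    """Try send_fn() until it succeeds, waiting exponentially longer
    between attempts (60 s, 120 s, 240 s, ... capped at an hour).
    Returns True on success, False if every attempt failed."""
    delay = base_delay
    for attempt in range(max_attempts):
        if send_fn():
            return True
        if attempt < max_attempts - 1:  # no sleep after the final failure
            time.sleep(delay)
            delay = min(delay * 2, max_delay)
    return False
```

During a planned outage like this one, `send_fn` would simply keep failing until the servers come back, which is exactly the "wait it out" behavior described above.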

    See the full article




  4. During Stanford's Winter Closure (December 18 through January 2), IT Services plans to schedule a network backbone maintenance window every morning from 4:00 a.m. to 8:00 a.m. to implement improvements in the network, as we did last year.  In most cases, the changes should not affect the connectivity of departmental or home networks; in cases where they might, any interruption in service should be under 5 minutes.

     While Folding@home will be up during this period, there may be brief (~5 minute) interruptions in network traffic to and from off-campus during the daily 4:00 to 8:00 a.m. (Pacific) maintenance windows.  The upshot is that the campus should get improved network performance after the upgrades.

    See the full article




  6. One of our main server rooms will be undergoing maintenance on December 16, 2010, which means some of the FAH servers will be offline that day.  It looks like most of the key FAH infrastructure will stay up, but there will likely be WU shortages, since a large fraction of our machines will be down.

     We are working to compensate where we can by distributing jobs to servers in other server rooms, but we wanted to give donors a heads-up in advance so they know this is coming.

    See the full article




  8. We have set the flat file update to run every hour at the 20-minute mark, in order to better match our web site stats to the flat files.

     The normal stats usually finish updating by the 10-minute mark, so the 20-minute mark should be safe to keep the two reasonably in sync.

     Please keep in mind that the flat files should be accessed by scripts no more than 24 times a day, and that the web site URLs in the cgi-bin directory should not be accessed by automatic scripts at all (please see our robots.txt for details).
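For script authors, the guidance above boils down to: poll at most once an hour, a little after the :20 update. A minimal timing sketch (a hypothetical helper of our own, not an official FAH tool; the actual download step is omitted):

```python
from datetime import datetime, timedelta

def next_fetch_time(now, minute_mark=25):
    """Return the next time a polite script should fetch the flat files:
    the next occurrence of `minute_mark` past the hour (a few minutes
    after the :20 update, to be safe).  Fetching once per hour at this
    mark stays within the 24-requests-per-day guideline."""
    candidate = now.replace(minute=minute_mark, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(hours=1)
    return candidate
```

For example, a script waking at 15:30 would sleep until 16:25 before its next fetch.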

    See the full article




  10. We have been tracking down a bug in our stats-by-operating-system page, and it looks like we have now found it.  Basically, we were counting each SMP client as a single CPU, which grossly undercounted the total number of CPUs, especially for Linux and OS X.  Please note that we have only updated this particular page (stats by operating system); we are looking into updating the others with per-client CPU counts as well.

     There is a remaining known issue with under-reporting the number of NVIDIA clients.  We are working on this.

     For those who are curious, here's the latest as of 5 minutes ago:

     Client statistics by OS

     OS Type             Native TFLOPS*   x86 TFLOPS*   Active CPUs   Total CPUs
     Windows                        303           303        291382      3508526
     Mac OS X/PowerPC                 4             4          4505       141460
     Mac OS X/Intel                 101           101         24526       134420
     Linux                          295           295        109289       539508
     ATI GPU                        896           945          6307       139443
     NVIDIA GPU                     329           694          2068       215909
     PLAYSTATION®3                  800          1688         28360      1034788
     Total                         2728          4030        449343      5714054

     Total number of non-Anonymous donors = 1478538

     Last updated at Mon, 08 Nov 2010 15:27:38

     DB date 2010-11-08 16:38:48
     Active CPUs are defined as those that have returned WUs within 50 days. Active GPUs are defined as those that have returned WUs within 10 days (due to the shorter deadlines on GPU WUs). Active PS3s are defined as those that have returned WUs within 15 days.
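These activity windows are easy to encode. A small illustrative sketch (the function and field names are our own, not an official FAH API):

```python
# Days-since-last-returned-WU cutoffs quoted in the stats page footnote.
ACTIVE_WINDOW_DAYS = {
    "cpu": 50,  # CPU clients: active if a WU was returned within 50 days
    "gpu": 10,  # GPU clients: shorter WU deadlines, so a 10-day window
    "ps3": 15,  # PS3 clients: 15-day window
}

def is_active(client_kind, days_since_last_wu):
    """True if a client of the given kind ('cpu', 'gpu', or 'ps3')
    still counts as active under the stats-page definition."""
    return days_since_last_wu <= ACTIVE_WINDOW_DAYS[client_kind]
```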

     *TFLOPS is the actual teraflops from the software cores, not the peak values from CPU/GPU/PS3 specs. Please see our main FAQ, FLOPS FAQ, PS3 FAQ, NVIDIA GPU FAQ, or ATI GPU FAQ for more details on specific platforms.

    See the full article


  11. Donors are often curious to hear about recent results.  While one can read our papers, listed on our web site, those are fairly technical and intended for a biological or biophysical audience.  The talk I gave at GTC 2010 ("Folding@home: Petaflops on the cheap today, exaflops soon?") was aimed at a computational audience, so it may be more approachable than the papers, especially for those more familiar with the computational side of FAH than the biological one.  You can see it here.

     I talk a bit about how FAH works and about some recent results in protein folding (pushing past the millisecond timescale), protein misfolding disease (Alzheimer's), and viral infection.  If you're curious about how FAH works or what we've done recently, this might be of interest to you.

    See the full article


  12. It’s with great pleasure that I announce that today is Folding@home’s official tenth anniversary.  It’s been an amazing 10 years, especially in terms of what we’ve collectively been able to do, and my team and I are grateful for all the contributions by millions of people that have made this possible.  If you’re curious to see what we’ve done so far, please check out our Results section or Diseases FAQ.  In particular, we're very excited about recent work on protein folding, which could radically reshape how people think about folding (and has led to recent awards).

     Behind the scenes, we’ve been planning some 10-year celebration activities, including a new client, better client-software feedback about what’s going on, some new client surprises through new collaborations, new backend software, and enhanced science via new cores.  We’re also pushing to support more hardware, such as new support for OpenCL on ATI hardware (an ATI OpenMM/OpenCL core16 is in internal testing, although it requires the v7 client).

     One key big-picture goal for this year is our push to make FAH much easier, more interesting, and more fun for donors to use.  With a new server backend soon to be in place, we should be ready to scale to much higher levels, and we’re excited about what we can do with the combination of a new client that’s easier to run, a much more stable backend, and new science in cores A3, A4, 15, and 16.

     Finally, among the surprises are new initiatives that I hope will change how people think about distributed computing.  That’s clearly a lot to hope for, but that’s our goal.  Sorry for being so coy about this now, but I wanted to let people know there’s a lot going on behind the scenes, and we’ll talk about it more as we announce these initiatives throughout the year.

    See the full article


  13. We have two papers (one that just came out in PNAS and one that's about to come out in Physical Review Letters -- papers #74 and #75 on our papers list) that we're particularly excited about.  They represent some key results we've learned by examining multiple results from Folding@home.  The resulting picture of how proteins fold is fairly different from the prevailing view, so it will be interesting to see what experiments tell us about the specific predictions made within.  We're excited to see where this goes!

    See the full article


  14. One of our concerns at Folding@home is the reliability of returned results: when you run on hundreds of thousands of machines around the world, in diverse environments, it's virtually guaranteed that some machines will be faulty.  We've long advocated the use of reliability-verification tools to make sure your machine is working properly, especially for users who overclock.  While good utilities are available for this task on CPUs and system RAM (e.g., StressCPU and Memtest86), few such tools exist for GPUs because of their relative novelty.

     Last year, we released the MemtestG80 GPU memory checker, an analog of Memtest86 for NVIDIA CUDA-enabled GPUs.  It has been widely used by the community to catch misbehaving video cards.  To bring this testing capability to a wider audience, we've just released a new, OpenCL-based GPU memory tester named MemtestCL.  Because it's based on OpenCL, users of ATI video cards (Radeon 4000 series and newer) can now validate their GPU memory, as well as users of NVIDIA cards.  Both MemtestG80 (CUDA) and MemtestCL (OpenCL) implement the test patterns from Memtest86 (plus a couple of custom patterns) to make sure your GPU memory is working correctly.

     If you run FAH heavily on a GPU, especially an overclocked one, it's a good idea to check your GPU memory, the same way you'd run Memtest on CPU memory.  Both programs are available from the FAH utilities download page (http://folding.stanford.edu/English/DownloadUtils); people interested in the source code can find LGPL-licensed copies on the project's SimTK homepage: https://simtk.org/home/memtest.
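For the curious: Memtest86-style patterns boil down to writing a bit pattern to every word, reading it back, then doing the same with its complement so every bit is exercised in both states ("moving inversions"). A simplified, CPU-side Python illustration of that idea (the real tools run such passes over GPU memory from CUDA or OpenCL kernels; this list-based version is only a sketch):

```python
def moving_inversions(buf, pattern=0xAAAAAAAA):
    """Run one moving-inversions pass over `buf`, a list standing in for
    a region of GPU memory (one 32-bit word per element).
    Returns the number of mismatched words found (0 for healthy memory)."""
    mask = 0xFFFFFFFF
    errors = 0
    # Pass 1: fill with the pattern, then read it back and verify.
    for i in range(len(buf)):
        buf[i] = pattern
    for i in range(len(buf)):
        if buf[i] != pattern:
            errors += 1
    # Pass 2: write the bitwise complement and verify that too, so every
    # bit is tested in both the 0 and 1 state.
    inverted = pattern ^ mask
    for i in range(len(buf)):
        buf[i] = inverted
    for i in range(len(buf)):
        if buf[i] != inverted:
            errors += 1
    return errors
```

A faulty word that flips bits between the write and the read-back would show up as a nonzero error count, which is essentially how these testers flag a misbehaving card.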

    See the full article


  15. We are a bit tight on SMP servers at the moment, as we are waiting for a big new server to come back online.  Our admins have been working on the problem and think they have found the issue with that machine's RAID (after consulting the hardware manufacturer, they're upgrading the drives' firmware).  We don't have an ETA, but I'm hoping it will be relatively soon.

     Once the new machine comes online, we should have a lot more SMP power.  Moreover, we have other servers waiting in the wings, and our plan is to bring several of them online for SMP in the coming weeks, with new WUs and SMP projects (although note that it takes a while for new projects to go through our QA process).

     All of this is in anticipation of the v7 client maturing and going into open beta in a few months.  V7 should make it a lot easier to run SMP.  However, it's also worth noting that the v6.30 client (already released, on our high-performance client download page) already makes SMP pretty easy to run.

    See the full article


  16. I've talked before about our work on our second-generation SMP client (dubbed SMP2).  The goal was to make it much simpler and easier for donors to run, namely by getting rid of the need for MPI libraries.  This is no small feat, and the bulk of the credit goes to the Gromacs developers who pushed on this.

     We have an SMP2 preview client (v6.30) on our High Performance Client Download Page (scroll to the bottom of the page), as well as an updated SMP guide and FAQ.  We're still calling this a beta client while we test SMP2.  However, the technology is already part of our new v7 client, and our plan is for SMP2 to be a standard part of FAH when v7 rolls out of beta testing.  On a side note, v7 is currently doing well in QA, and I hope we'll have an open beta soon, e.g. in the next few months if not sooner.

    See the full article
