Everything posted by PcPerf bot on PcPerf.fr

  1. Simulating protein folding on the millisecond timescale has been a major challenge for many years. When we started Folding@home, our first goal was to break the microsecond barrier. The millisecond barrier is 1000x harder and represents a major step forward in molecular simulation. Specifically, in a recent paper (http://pubs.acs.org/doi/abs/10.1021/ja9090353), Folding@home researchers Vincent Voelz, Greg Bowman, Kyle Beauchamp, and Vijay Pande have broken this barrier. The movie below is of one of the trajectories that folded (i.e. started unfolded and ended up in the folded state). From simulations like these, we have found some new surprises in how proteins fold. Please see the paper (URL above) for more details. Why is this important? Protein misfolding occurs on long timescales, so this first millisecond-timescale simulation of protein folding demonstrates that our new Markov State Model (MSM) technology can successfully simulate very long timescales. It made sense to go after protein folding first, since there is a wealth of experimental data against which to test our simulations. While this paper on protein folding has just come out, we have already been using this MSM technology to study protein misfolding in Alzheimer's Disease, following up on our 2008 paper. While our previous paper was able to reach timescales long enough to see small molecular-weight oligomers, this new methodology gives us hope that we can push our Alzheimer's simulations further, making more direct connections to larger, more complex Abeta oligomers than we were previously able to.
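The MSM idea described above can be illustrated with a small sketch: many short trajectories are clustered into discrete states, transitions between states are counted at a fixed lag time, and long-timescale kinetics are read off from the eigenvalues of the resulting transition matrix. The three-state counts and the lag time below are made up for illustration; this is not the Pande lab's actual MSM code.

```python
import numpy as np

tau_ns = 100.0  # lag time between observations (illustrative value)

# Hypothetical transition counts among three conformational states,
# tallied from many short trajectories at lag time tau.
counts = np.array([[900.0,  90.0,  10.0],
                   [ 45.0, 940.0,  15.0],
                   [  5.0,  20.0, 975.0]])

# Row-normalize to get a row-stochastic transition matrix T.
T = counts / counts.sum(axis=1, keepdims=True)

# The largest eigenvalue of T is 1 (equilibrium). Slower processes have
# eigenvalues just below 1; their implied timescale is t = -tau / ln(lambda),
# which can far exceed the length of any individual short trajectory.
eigvals = np.sort(np.linalg.eigvals(T).real)[::-1]
implied = [-tau_ns / np.log(l) for l in eigvals[1:] if 0.0 < l < 1.0]
print(f"slowest implied timescale: {implied[0]:.0f} ns")
```

The point of the sketch is the aggregation step: no single trajectory needs to reach the millisecond regime, because the eigenvalue analysis extracts the slow kinetics from many much shorter runs.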
  2. Just an update on my blog post. I want to make it clear that we're seeing problems in both our ATI and NVIDIA GPU OpenCL implementations for OpenMM. Maybe it's on our side (I bet some bugs are on our side), but we suspect there could be issues with both the NVIDIA and ATI OpenCL implementations (and maybe even Apple's). We are working with everyone closely to either fix our bugs or get our code compliant with the OpenCL implementations. Also, it looks like I should have been more careful in the wording of my previous post. I was referring to the issues above when talking about whether the ATI implementation was "fully functional", and some groups have taken this out of my intended context. As far as I understand, the ATI OpenCL implementation is fully functional. Finally, it is important to note that our Brook code is still supported, just not actively developed: Folding@Home/OpenMM is concentrating on shifting to OpenCL, so there will be no more active development using Brook.
  3. Here is an update and some more details on the third-generation GPU core. As I stated before, this core is built on OpenMM, which brings the science further along and allows us to make easier updates. OpenMM supports OpenCL in beta form in its 1.0 release (scheduled for late January 2010), but it is important to stress that the OpenCL support is very early, so we will not be relying on OpenCL in the first releases of the GPU3 core. This means that the core will roll out for NVIDIA first. ATI has deprecated Brook but does not yet have a fully functioning OpenCL implementation, so we are stuck in between on the ATI side. Once the OpenCL implementations mature (on both ATI and NVIDIA), we will be able to finish and optimize our OpenCL code (we can't reliably optimize code until the implementations are more finalized), and then the OpenCL portions will go into QA. So, if you are interested in OpenCL (especially in terms of ATI support), this will not be in the initial release, but once the environment has matured, we will push forward. By the way, once OpenCL support for multi-core boxes has matured, we will also look into porting the GPU3 code as a new SMP2-style core (i.e. threads-based SMP support). If the performance is strong, it's appealing to think that we could go back to having a single dominant code base for much of FAH's calculations.
  4. We have been going through various shortages of PowerPC OSX WUs, due to the new cores not supporting PPC machines. While we may have new PPC WUs in the future, we want to give a heads-up that we are deprecating PPC OSX support in Folding@home. We would love to support as many OSes as possible, but the overall compute power from the PPC group is small enough, and the effort to support it large enough, that we have decided it is better to put our resources into improving support for the other OSes. Thanks to all of you who have faithfully supported FAH with the processing that you have donated!
  5. Happy New Year! We're starting out the new year with a planned auto-core upgrade for GPU core 11, scheduled for Monday morning Stanford time. This will bring the GPU core to version 1.31, which has many improvements. In particular, this core is very helpful with the new P10101 WUs. If you'd like to upgrade sooner, you can always do a manual core upgrade at any time by deleting your GPU core (or, perhaps safer, renaming the file so you can revert if you want). However, such manual core upgrades are just for FAH experts. The FAH client software will do all of this for you via the built-in auto-upgrade mechanism on Monday. It's also worth mentioning that we have made good progress with the GPU3 core, which should be rolling out of beta to production early in 2010.
  6. In preparation for the migration to the new stats db, we have been doing some profiling on the stats db to see where we can get the most speedup. We have found a few items that we're working on to increase speed. The hardware improvement will deliver the biggest gain in general, but now is a good time for us to get the stats system lean and efficient, since profiling afterwards will be harder (and any efficiencies we can squeeze out will help years from now, when the new hardware is underpowered for the stats load of that day). So, from the donor perspective, you will at first see the results of some experimenting on our side to improve throughput (i.e. there may be some bugs too), but hopefully there will be throughput benefits even in the short term. Finally, we have noticed that there are many web clients pounding our stats system, some doing 10,000 accesses per day and many doing 1,000. These IPs will be banned, so please turn off any scripts you have doing automatic access. "Normal" access is assumed to be up to 50 accesses a day. There's no need to do 1,000 or more, and it really bogs down the stats for everyone else. If you're looking to get access to our stats system, please check out our FAQ entry on this issue. UPDATE: We've turned off certain stats page functions dealing with queries relating to the number of CPUs. We have also stepped up the caching of team pages (all team pages are cached for 60 minutes). This is a major performance win for the stats db and so far is looking to help a lot. This change is temporary, since the cause is clear: the part of our db that keeps track of items like specific CPUs is far too big, due to the millions of CPUs that have participated in FAH. This won't be a problem with the new servers, so we will simply turn these features back on after we migrate. For now, turning off these parts of the stats pages will help a lot until we switch over.
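The 60-minute team-page caching described above can be sketched as a simple time-to-live (TTL) cache in front of an expensive query. This is a minimal stand-alone illustration of the idea, not the actual stats-server code; the page-rendering function and key names are made up.

```python
import time

class TTLCache:
    """Minimal time-to-live cache, in the spirit of caching rendered
    team pages for 60 minutes to spare the stats database."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_timestamp, value)

    def get(self, key, compute):
        """Return the cached value for key, calling `compute` (a
        zero-argument callable, e.g. a team-page renderer that hits
        the database) only when the entry is missing or expired."""
        now = time.time()
        entry = self._store.get(key)
        if entry is not None and entry[0] > now:
            return entry[1]
        value = compute()
        self._store[key] = (now + self.ttl, value)
        return value

# Hypothetical usage: the expensive db query runs at most once per hour
# per team, no matter how often the page is requested.
calls = 0
def render_team_page():
    global calls
    calls += 1
    return "<html>team 1 stats</html>"

cache = TTLCache(ttl_seconds=3600)
page1 = cache.get("team-1", render_team_page)
page2 = cache.get("team-1", render_team_page)  # served from cache
```

The trade-off is the one named in the post: pages can be up to an hour stale, in exchange for cutting the per-request load on the database to nearly nothing.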
  7. A machine went down last night, which brought down vspg1, vspg1v, and vspg1v2 (these are all different virtual interfaces on the same physical server). Our admins (working during the holidays -- thanks to them for their hard work) reset the machine this morning and we got the machines going again. While vspg1* were down, this led to a much greater load on other machines, causing issues for vspg2v2, vspg4v2, and vsp07v. These reset themselves some time around 7:30am Pacific time (with the exception of vsp07v, which I reset). So, it looks like the server farm is settling down. I can't wait until we get the new, really powerful machines loaded up with jobs to help take over some of this load. Some of the jobs being tested right now are on those machines, so that should be very soon. Note that during the holidays (especially the next week or so) we have a limited staff in the office, but those who are working are diligently looking for problems. Happy Holidays!
  8. As announced some time ago, we have been working on a new core (Protomol core B4), and it has been looking good in QA, so we have started to release Protomol core WUs more broadly. We have a preliminary Protomol FAQ for those who are curious to get more information. This new core implements the NML (Normal Mode Langevin) method, which accelerates the long-time dynamics of proteins, reaching speeds up to a hundred times faster than conventional molecular dynamics. The method finds low-frequency modes using normal mode analysis and projects the motion of the molecule along them, while the nearly instantaneous fast motions are resolved separately. If you want to learn more about this method, you should read this pre-publication: Multiscale Dynamics of Macromolecules Using Normal Mode Langevin. Based on Protomol 3.1, this core and its associated projects have the following goals:
  • To validate NML by simulating the folding and dynamics of the Fip35 WW domain.
  • To understand the role of mutations in folding.
  • To understand the activation of src kinase, an enzyme that is involved in the onset of some kinds of cancer.
On the technical side, this core is able to take advantage of most modern CPU optimizations (SSE2, SSE3, SSSE3, SSE4.1 and SSE4.2); however, a few compatibility issues are still present on AMD processors, resulting in the core only using SSE2 on those chips. This should change in the not-too-distant future, once the issues have been worked out. If you have a processor that doesn't have the above-mentioned optimizations (Pentium 3, Athlon XP, etc.), please report the behavior and performance of this core on your machine. For more information about the Protomol core, you should visit the Protomol official site.
The new projects are distributed by a new server (129.74.85.48), located at the University of Notre Dame (Indiana), and have the following characteristics:
  • p10000: 544 atoms, 84.48 points, preferred deadline 3.07 days, final deadline 23.04 days. This project uses conventional simulation methods.
  • p10001: 544 atoms, 50.56 points, preferred deadline 1.84 days, final deadline 13.79 days. This project uses the NML simulation method.
We will be posting more information as time goes on. I'm very excited about the new capabilities here, since NML lets us get an amazing speedup algorithmically, i.e. without any additional CPU power. That algorithmic speedup, multiplied by the vast power of FAH, could mean very significant advances shortly, making 2010 an exciting year for FAH (in many ways)!
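The core idea behind NML, projecting the dynamics onto low-frequency normal modes, can be sketched in a few lines: diagonalize an (approximate) Hessian, keep the lowest-frequency eigenvectors as the slow subspace, and split any displacement into a slow part and a fast remainder. This is a toy of the concept only, with a synthetic matrix standing in for a real Hessian; it is not the Protomol implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                       # degrees of freedom in a toy system
A = rng.standard_normal((n, n))
H = A @ A.T + n * np.eye(n)  # synthetic positive-definite "Hessian"

# Normal mode analysis: eigenvalues of the Hessian are squared
# frequencies; eigh returns them in ascending order, so the first
# columns of `modes` are the lowest-frequency (slowest) modes.
freq2, modes = np.linalg.eigh(H)
n_slow = 4
slow = modes[:, :n_slow]     # orthonormal basis for the slow subspace

x = rng.standard_normal(n)   # toy displacement vector
x_slow = slow @ (slow.T @ x) # projection onto the slow modes
x_fast = x - x_slow          # fast remainder, handled cheaply in NML

# Slow projection plus fast remainder recover the full state exactly.
assert np.allclose(x_slow + x_fast, x)
```

The speedup in the actual method comes from propagating Langevin dynamics only in the small slow subspace while the fast remainder is relaxed, rather than integrating every degree of freedom at the smallest timestep.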
  9. I'm at NIH today for the IMAG meeting on the Impact of Modeling on Biomedical Research. IMAG stands for the Interagency Modeling and Analysis Group, and it comprises all of the major science funding agencies in the US (NIH, NSF, DOE, and more). It's a great idea to get these agencies together and well coordinated in their approach to computer simulation and modeling. In the meeting, they've broken all of modeling down into 5 scales, and I'm chairing the atomic and molecular scale. Our charge is to talk about the future of modeling, and I'll be highlighting some of the key new results from Folding@home. For more information, please check out: http://www.imagwiki.org/mediawiki/index.ph...FM_Announcement
  10. We're seeing some issues with the main http://folding.stanford.edu web page. We have some ideas, but it's late here (11pm Pacific time), so we'll have to wait for IT to get in to work in the morning to get a fix implemented. For now, if you see "Access Denied", please try to reload the page: that should yield a working page.
  11. We are excited to announce progress in our testing of new SMP2 cores. These cores use threads-based parallelization instead of MPI and, we hope, should be much more robust than our first-generation SMP cores. The SMP2 cores are currently in testing; watch for future announcements regarding their release. The first SMP2 core to be released will be a Gromacs-based core with core number A3. In conjunction with this release, we are implementing some updates to our points benchmarking system. In particular, we will use early-completion bonuses, which we have been testing with the bigadv work unit program, more extensively for SMP2 cores.
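The contrast between the two models can be sketched briefly: an MPI core runs as several communicating processes that must be launched and wired together, while a threads-based core runs inside one process with shared memory, which is a large part of what makes it easier to install and more robust. The toy "energy" workload below, split across a thread pool, is purely illustrative and is not the Gromacs or Desmond core code.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_energy(chunk):
    # Stand-in for computing the interactions for one slice of the atoms;
    # a real MD core would evaluate force-field terms here.
    return sum(x * x for x in chunk)

atoms = list(range(1000))    # hypothetical atom indices
n_threads = 4

# Strided partition: every atom lands in exactly one chunk.
chunks = [atoms[i::n_threads] for i in range(n_threads)]

# Threads share the process's memory, so the partial results are
# combined directly -- no message passing layer to configure.
with ThreadPoolExecutor(max_workers=n_threads) as pool:
    total = sum(pool.map(partial_energy, chunks))
```

For a donor, the practical difference is that a single process either runs or it doesn't; there is no MPI runtime to install or misconfigure.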
  12. We're running a stats recredit from Nov 12. Sorry for the delay in getting this going.
  13. There was a problem with the stats server overnight. We got to it first thing in the morning (around 6:30am Pacific time), but it looks like a few stats updates were missed. The missed data is backed up on another machine and we can re-enter it. However, we typically take a couple of days to re-enter it, to make sure it is done right. So, the bottom line is that the stats system is back up; we will recredit the WUs that were not credited overnight, but this will take a couple of days to complete. Sorry for the delay on the recredit, but it's better to make sure we do it right rather than rush.
  14. There has been a lot of work on updating the Protomol core to bring it in line with the other cores in Folding@home. I'm happy to say that a lot of progress has been made and it's looking much better. Joe will continue to test it, but it looks close to moving to the next levels of QA. The GPU3 core is also moving along. It will be called core_15 (the natural next number in the GPU series). The main change there has been to incorporate the updated GPU code from OpenMM. OpenMM was based on our FAH GPU MD code to start, but has since had several enhancements and additions. In particular, it should be much more stable than the previous FAH GPU MD cores. However, this stability does come at a mild cost in performance. We will address this at the benchmarking stage, since all core 15 (GPU3) projects will start fresh rather than continue existing projects.
  15. We've ordered a new class of servers which should make a big impact on FAH server load, and also allow us to release several new big projects with more WUs. We're very excited about this since we've been limited by server space recently, which has also led to WU shortages. The new servers each have 24 x 2TB drives, so we should have plenty of space. The servers should physically arrive next week, so including setup time, WU testing protocols, etc., it will still take a few weeks to get the new WUs out broadly, but at least the ball is rolling.
  16. We have been pretty busy with new cores for FAH and I wanted to give donors an update.
1) SMP2: Gromacs and Desmond. Much effort has gone into our "SMP2" project, the codename for the second-generation SMP client. The main goal here was to make it MUCH easier to use, and doing that meant getting rid of our use of MPI. We have had two approaches to this; both ditch MPI by using threads instead. One was to switch to a new piece of software for the core. This has led to the "Desmond" core, based on software from DE Shaw Research. The second approach was to communicate the MPI issues to the Gromacs developer team and work with them to push for a threads-based Gromacs implementation. Both of these are coming along well and we are testing the cores in house. You should hopefully see these cores "in the wild" (i.e. running on FAH) in a month or two, assuming that tests go well.
2) Normal Mode Langevin (NML) dynamics in the Protomol core. We have been working on another approach to speeding up dynamics greatly, based on a new technique called Normal Mode Langevin (NML) dynamics. This method uses the same style of models as normal MD (same force fields, etc.) and thus should have the same accuracy, but with a pretty significant speedup due to algorithmic advances. NML is complementary to our other methods, so we're hoping to add it to everything else (in particular to the GPU core). To start, we will be testing it in a new core based on the Protomol software. Protomol is designed to allow for rapid prototyping of molecular simulations, which is perfect for NML.
3) GPU3: next-generation GPU core, based on OpenMM. We have been making major advances in GPU simulation, with the key advances going into OpenMM, our open library for molecular simulation. OpenMM started with our GPU2 code as a base, but has really flourished since then. Thus, we have rewritten our GPU core to use OpenMM and have been testing that recently as well. It is designed to be completely backward compatible, but should make simulations much more stable on the GPU as well as add new science features. A key next step for OpenMM is OpenCL support, which should allow much more efficient use of new ATI GPUs and beyond.
I'm very excited about these new advances. They really should fundamentally improve the key science software behind FAH, as well as making the donor experience much smoother on our more experimental clients (i.e. on GPU and SMP).
  17. Two key parts of FAH technology -- OpenMM (the software that powers FAH on GPUs) and MSMbuilder (the algorithms that FAH uses to stitch together hundreds of thousands of donor simulations to get coherent results) -- are highlighted in this month's Biomedical Computing Review. You can download a copy here: http://biomedicalcomputationreview.org/5/4/index.html
  18. I wanted to continue the series of introducing FAH team members. Greg Bowman has been a key figure in FAH the last few years, especially in methods and software development and their applications to protein folding (e.g. see the list of software he's made available on Simtk.org). Here's a short intro written by Greg to tell his personal story. Between the second and third grades I lost most of my vision due to a protein misfolding disease called juvenile macular degeneration (JMD). At the time, I was too young to understand the full implications of my disorder; however, they became abundantly clear during middle school. I soon learned that JMD is the result of a few point mutations in a key gene I'd inherited from my parents and, therefore, took a keen interest in exciting developments in molecular biology, like the cloning of Dolly the sheep, as they pointed to a means of curing diseases like my own. A pressing challenge was then to discover how to make a contribution to molecular biology, given that performing laboratory experiments is extremely challenging with poor vision. Fortunately, I developed a passion for computer science and mathematical modeling during high school and realized that these skills could be brought to bear on biological problems. To prepare myself for a career in biological computing, I completed a major in computer science and minors in biomedical engineering and chemistry at Cornell University, where I also began basic research on protein folding. Now I am performing full-time research on protein folding and misfolding in the Pande lab. While I have not been able to tackle JMD directly at this point, I have developed methods that will be critical for doing so and have begun working on other protein misfolding diseases like Alzheimer's.
  19. I just got news that our main power feed has been damaged by the SIM1 construction crew. We may be out of power for over 6 hours. The good news is that most of FAH is immune from this right now. The bad news is that we may have to stop the stats update and turn off the web server which serves the stats. Note that the Assignment servers and all of the work servers are separate from this now, so FAH will be running, even if the stats update is put on hold. Stats would continue to accumulate, just not update on the web site until we turn that back on (if we are forced to bring down those servers).
  20. There was a brief, unplanned network outage this morning, caused by maintenance, from about 7:30 to 7:50 AM. Ironically, the work we were performing this morning is part of our redesign to prevent things like this from happening. The good news is that this sort of thing should become much rarer in the future.
  21. The scientific code underlying our research has a lot of dials and switches. It can be very tempting to play with those switches to optimize the calculations. However, many of the code settings can impact scientific results in non-obvious ways. It can also hurt scientific reproducibility. It is therefore extremely important not to modify the scientific cores or data for Folding@Home calculations. Returning any work results with modified cores or data can taint both your results and the results of any WUs calculated on the basis of those. Thus even well-meaning optimization can potentially hurt the scientific value of the project substantially. This is why the EULA specifically prohibits such modifications. When you donate to Folding@Home, we give you a number of "dials" at the client level that you can use to adjust your contributions. Many donors find that tuning these "dials" and their machine configuration can optimize their contribution. Please use only the controls provided by the client and the operating system; do not modify the cores or Folding@Home data in any way. Doing so is the equivalent of providing a tainted donation. When we detect such tainted donations, we may remove points awarded, decline further donations by blocking work assignment, or take further action. We greatly appreciate your contributions to Folding@Home; please don't devalue your collective work by undermining the scientific results.
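One common way projects detect modified binaries or data is to compare a cryptographic checksum of the file against a known-good digest. The sketch below is illustrative only; the file contents and digest are made up, and this is not a description of Folding@Home's actual verification mechanism.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the given bytes; any single-bit change in the
    input produces a completely different digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical core binary and the digest a server might publish for it.
official_core = b"pretend this is the core binary"
known_good = sha256_of(official_core)

# Even a small unauthorized patch changes the hash.
tampered_core = official_core + b" with an unauthorized patch"

assert sha256_of(official_core) == known_good   # clean core passes
assert sha256_of(tampered_core) != known_good   # modification detected
```

This is why "just tweaking a few switches" is detectable in principle: any byte-level change to a core or its data yields a different checksum than the one the project distributed.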
  22. We are preparing for the public release of a new work unit category: extra-large advanced methods work units. Some background information is provided below; we'll update this thread with more details as they emerge.
Why a new work unit category? We have some specific projects where we 1) have large simulation systems and 2) want to get results fast. As multi-core processors get more powerful, we can perform calculations on Folding@Home that previously required supercomputing clusters.
What's different about these work units from the donor perspective? These work units are special SMP work units that have larger upload and download sizes, shorter deadlines, and require more memory and CPU resources. That's why we've created a new category.
Is there any points incentive for running these work units? The base value of these work units corresponds roughly to what an SMP work unit using the A2 core would yield on an equivalent calculation. However, because fast completion is a scientific priority for these work units, we are doing a **trial** of a new bonus scheme where faster WU completion yields a points bonus.
What systems can run these work units? Right now, only Linux and OS X systems can run these work units, and they require 8 or more cores. We prefer 8+ *physical* cores, although fast Core i7 machines that are dedicated folders have proven sufficient during the testing process. The points incentives are designed to match appropriate resources to points value; if your machine is marginal for the extra-large work units, you're probably better off running standard SMP.
Does this have any relation to the large-point-value work units and recent high-scoring users? Yes. The initial projects are 2681 and 2682, valued at ~25K points base. Although these point values seem high, the work units are correspondingly larger, so the base PPD (points per day) value is roughly comparable to standard SMP.
A collaborator has donated a large amount of compute time to this project; those clients were initially running under username Anonymous/team 1. To give proper credit for the donation, we have changed the username to PDC, team 1. During the period of this donation, there are at any time between 100 and 400 8-core clients running under this username (800-3200 cores total). Please stay tuned for further details regarding the upcoming release.
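To make the shape of an early-completion bonus concrete, here is one hypothetical scheme: base points are scaled up the faster a WU is returned relative to its deadline, with a cap on the multiplier. The formula and numbers below are assumptions for illustration only, not the actual trial scheme described in the post.

```python
def awarded_points(base, deadline_days, elapsed_days, max_multiplier=4.0):
    """Hypothetical early-completion bonus: scale base points by how
    far ahead of the deadline the WU came back, capped at a maximum."""
    if elapsed_days >= deadline_days:
        return base                       # no bonus at or past the deadline
    multiplier = deadline_days / elapsed_days
    return base * min(multiplier, max_multiplier)

# A ~25K-point extra-large WU against an assumed 4-day deadline:
fast = awarded_points(25000, deadline_days=4.0, elapsed_days=1.0)  # capped at 4x
slow = awarded_points(25000, deadline_days=4.0, elapsed_days=4.0)  # base only
```

The design intent such a scheme captures is the one stated above: since fast completion is a scientific priority, points per day should rise sharply for machines that genuinely finish these WUs early, steering marginal machines back toward standard SMP.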
  23. For those running the Windows SMP client, I just wanted to remind donors that the previous client expires today and that an update has been available for a few days. Please see "Drop-in binary for current Windows SMP console client (6.24) Expires July 4, 2010" on our High Performance Client page (at the bottom of the page): http://folding.stanford.edu/English/DownloadWinOther You can also go directly to the update at this link: Folding@home-Win32-x86.exe Note that this is a "drop-in replacement", which means that you copy this binary on top of the existing binary. For new installations, please install the previous package first, then apply the replacement on top of it.
  24. We've got a big stats recredit going on now, mainly for a particular server (vsp07v), but it will also recredit a few missing odds and ends from other servers. The web site will be down during the recredit, but we don't expect it to take too long. UPDATE: the recredit is done and the stats system is doing a regular update now.
  25. Maintenance is under way on these machines. It looks like a PDU went down during an equipment move today and the redundant power did not kick in correctly. Hopefully this will be something as simple as powering the RAID back up and restarting the server. It's 7pm Pacific time, so the admins are likely not going to get to this until they get back in the morning, but we'll see. We've filed a ticket and noted that this is a very high-priority item. We are also monitoring the collection servers for these two servers. They appear to be up and running well, so hopefully this will not impact donors' return of WUs. UPDATE: The servers went back up about an hour ago and have been running well since then. Thanks to our admins for their work on this one.