
PcPerf bot


Everything posted by PcPerf bot

  1. With our new web site, we want to unify all of our web assets under the same URL and the same look (as much as possible). So, we've moved our blog to http://folding.stanford.edu/home/blog. For now, we'll keep the old blog here, but we've moved the old posts over, so eventually this site will go away as well. View the full article
  2. Looks like everything is up, except for a single server (VSP07) and the VMs associated with it. This server serves Core11 GPU clients, so those are offline at the moment. Our sysadmins are working on this now. View the full article
  3. The networking seems to be up, but there are a few issues. They've got something basic going now and will resolve the remaining issues in the morning. We are running a stats update right now. View the full article
  4. Looks like the servers are up and healthy, but there is an issue with the networking, which central IT is working on. For now, we are in a holding pattern until the networking has been resolved. View the full article
  5. The maintenance is moving along, but certain key machines are down during the move, most notably the main AS, the GPU AS, and some key stats systems. We expect the maintenance to be completed in an hour or two. View the full article
  6. One aspect which has dominated FAH for a decade is the continual push for new scientific approaches. This manifests itself in terms of new scientific cores. For example, the new GPU core (Core17) has brought huge speed improvements (especially to AMD GPUs) and involved a complete rewrite. A negative consequence of this continuous push for improvement and progress is that newer cores are often restricted to more advanced hardware. To help utilize as much power as is available to FAH, we tend to continue projects with older cores, but eventually, the science they can do becomes too limiting, and we must retire them. This is the eventual fate for all cores, but it is certainly an issue sooner for certain cores, especially Cores 11 and 78. While we don't have any specific end dates for either, we'd like to remind donors that those cores are reaching "end of life" status, and when they are retired, certain older hardware (e.g. CPUs that don't support SSE2, or older GPUs) won't be supported by FAH. The bottom line is that we're working to delay that as long as possible, but this post is a heads up that our support of those cores won't last a lot longer. If I had to guess, I'd say they would probably be retired within a year or so, maybe 2 years if the existing projects need additional data. As always, we'll give donors more information as we know it and will try to give a more specific end date when we know it. View the full article
  7. We will be moving many key Folding@home servers on Monday, August 26. While much of FAH will be up, certain key systems will be down for a few hours starting in the morning (Pacific time) of Monday, August 26. We'll give more updates as we move along on Monday. View the full article
  8. Our primary goal with benchmarking is "equal points for equal work." However, making this process consistent over lots of different types of WUs and different types of hardware is tricky. We recently had an internal discussion about the PPD for two projects (7810 and 8900), and we thought donors might find these details interesting. We were working to rebalance the points to make the PPD consistent, but even just doing that over the wide range of hardware is difficult. Check out the graph below, which shows the PPD on the y-axis and donor GPUs sorted along the x-axis by typical PPD. The dark line shows averages and the gray area shows error bars (variation between WUs for a given project on the same GPU type). What we see is that our protocol balanced the PPD on the low end, but on the high end there is both bigger variation (more shaded area) and also bigger differences on the very highest power GPUs. In these situations, we usually go with our protocols, but this time, given all the analysis we did on it, I thought it would be interesting for donors to see these sorts of details. It's these sorts of variations which lead to PPD fluctuations, so perhaps the main lesson here is that even with our protocols and plans, it's really hard to be consistent over all the different hardware, even when we're talking about just GPUs and just 2 projects. View the full article
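As a rough illustration of the analysis described above, averaging PPD per GPU model over many WUs and tracking the spread (the "error bars" in the post) can be sketched as follows. All the model names and numbers here are made up for illustration, not data from projects 7810 or 8900:

```python
# Sketch of a per-GPU PPD consistency check: for each GPU model, average
# the PPD observed across WUs and report the spread. Hypothetical data.
from statistics import mean, stdev

# ppd_samples[gpu] = PPD observed on individual WUs (illustrative numbers)
ppd_samples = {
    "GTS 450": [11000, 11500, 10800, 11200],
    "GTX 580": [29000, 31000, 28500, 30500],
    "GTX 680": [78000, 84000, 72000, 90000],  # wider spread on the high end
}

# Sort GPUs by typical PPD (the x-axis ordering in the post's graph).
for gpu, samples in sorted(ppd_samples.items(), key=lambda kv: mean(kv[1])):
    avg, sd = mean(samples), stdev(samples)
    print(f"{gpu}: mean {avg:.0f} PPD, stdev {sd:.0f} ({100 * sd / avg:.1f}% variation)")
```

Run on real per-WU credit logs, output like this makes the post's point visible at a glance: the relative variation grows toward the high-end cards.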
  9. We've been working behind the scenes on a revamp of our web site. It went live today (http://folding.stanford.edu/home). This is part of our larger plan to make FAH more friendly and easy to use, especially for non-experts. With that said, we're now thinking about next steps to make FAH more fun and appealing to experts, such as computer enthusiasts and gamers. We're in the early stages of deciding what would be useful there. If you have ideas, please do give us some feedback on our forum in this thread: http://foldingforum.org/viewtopic.php?f=16&t=24532 View the full article
  10. We are proud to announce that our latest GPU core, FahCore 17, was recently moved from beta to advanced testing, the last quality assurance step before a full release. As we previously mentioned, this core is a significant step for us. FahCore 17 is a complete overhaul of our previous GPU cores. It brings a cleaner and more streamlined codebase, new serialization mechanisms that allow us to set up diverse simulations, and improved stability. Its use of OpenCL has unified our development, allowing the single core to run on both Nvidia and AMD cards, and theoretically any OpenCL-capable device. It is also our first GPU core to run natively on Linux, although we are only supporting Nvidia GPUs there for the time being as we wait for AMD's Linux drivers to mature a bit more. Overall, this core sets a strong foundation for the future of GPU core development. On AMD cards, FahCore 17 is about 10 times faster than the old GPU cores, and on Nvidia it's about twice as fast. This is mainly due to its OpenMM 5.1 base, which contains many optimizations that deliver a significant speedup. One optimization in particular that we are waiting for is CUDA JIT, a just-in-time compiler that Nvidia may be introducing into its drivers in the near future. Not only will this technology allow us to offer support for the CUDA platform with FahCore 17, but the JIT compiler is also likely to deliver a massive speedup. For the time being, we continue to work on finding additional optimizations on our end. We have also successfully tested FahCore 17 with extremely large proteins (500,000+ atoms), which are on par with the ones used by "bigadv" CPU projects. To run FahCore 17, you need a Fermi GPU or better and Windows or Linux, or an AMD HD5000 or better and Windows. It also currently requires proprietary drivers from these vendors. You can test FahCore 17 by adding the "client-type = advanced" setting to the extra core options in the V7 client, as described in the Configuration FAQ.
Another excellent resource is the GPU FAQ, which describes why GPUs are so helpful to us. We'd like to thank all the alpha testers on FreeNode's #fah IRC channel, as well as the beta testers on foldingforum.org, who have all helped us bring the core to this point! View the full article
  11. We've been working to streamline the stats system update to minimize downtime for donors. We are now able to update stats without taking the web pages offline, so stats updates will continue every hour while the stats web pages on our site remain available. View the full article
  12. A key FAH server is down right now, and stats updates have been suspended until it is back up. As always, stats are kept on the Work Servers (WSs), so even if an update hasn't been run, the points are being accumulated as WUs come in; it's only a matter of updating the database for donors to see them. We don't have an ETA on this right now, but our team is working on it. View the full article
  13. We've updated Core 17 with OpenMM 5.1, so check out the release video for more info: A live Q&A is available on reddit. Some of the key highlights are: -Up to 120,000 PPD on GTX Titan, and 110,000 PPD on HD 7970 -Support for more diverse simulations -Linux support on NVIDIA cards and 64-bit OSes -FAHBench updated to use the latest OpenMM and display version information Full Transcript of the Talk: Hi, I'm Yutong, a GPU core developer here at Folding@home. Today I want to give you guys an update on what we've been working on over the past few months. Let's take a look at the three major components of GPU core development. First off, we have OpenMM, our open source library for MD simulations. It's used by both FAHBench and Core17. FAHBench is our official benchmarking tool for GPUs, and it supports all OpenCL compatible devices. We're very happy to tell you guys that it's been recently added to Anandtech's GPU test suite. And Core17 is what your Folding@home clients use to do science. By the way, all those arrows just mean that the entire development process is interconnected. So let's take a step back in time. Last year in October, we conceived Core 17. And we had three major goals in mind. We wanted a core that was going to be faster, more stable, and able to support more types of simulations than just implicit solvent. But because of how our old Cores 15 and 16 were written, it was in fact easier for us to write the core from scratch. So in November, we started rewriting some of the key parts to replace some pre-existing functionality. Two months later, in January, things started to come together. Our work server, assignment server, and client were modified to support Core 17. We also started an internal test team, for the first time ever, using an IRC channel on freenode to provide real-time testing feedback. In February, Core17 had a public beta of over 1000 GPUs, and we learned a lot of valuable things.
One of them was that, on NVIDIA, the core wasn't all that much faster, it seemed, though on AMD things certainly looked brighter. Things still crashed occasionally, and bugs were certainly still present. So we went back to the drawing board to improve the core. In April, we added a lot of new optimizations and bug fixes to OpenMM. We tested a Linux core for the first time ever on GPUs. And our internal testing team had grown to over 30 people. And that brings us to today. We now support many more types of simulations, ranging from explicit solvent to large systems of up to 100,000 atoms. We improved the stability of our cores. We now have a sustainable code base. We added support for Linux for the first time. It's also really fast, so I'm sure the burning question on your mind is: just how fast is it? Well, let's take a look. On the GTX Titan, we saw it go from 50,000 points per day to over 120,000 points per day. On the GTX 680, we saw it go from 30,000 points per day to over 80,000 points per day. On the AMD HD 7970, we saw it go from 10,000 points per day to over 110,000 points per day. On the AMD HD 7870, we saw it jump from 5,000 points per day to over 50,000 points per day. We never want to rest on our laurels for too long. We are already planning support for more Intel devices in the future, such as the i7s, integrated graphics cards, and Xeon Phis. We plan to add more projects to Folding@home as time goes on, so researchers within our group can investigate more systems of interest. And as always, we want things to be faster. Now let's go back to the beginning again, and here's how you guys can help us. If you're a programmer, we invite you to contribute to the open source OpenMM project (available at the end of the month on github.com/simtk/openmm). If you're an enthusiast and like to build state-of-the-art computers, we encourage you to run FAHBench and join our internal testing team on freenode.
If you're a donor, we'd like you guys to help us spread the word about Folding@home and bring more people, and their machines of course. Now, before I wrap things up, there are some people I'd like to thank. Our internal testers are on the right-hand side, and they've been instrumental in providing me with real-time feedback regarding our tests. We couldn't have done it this fast without them. On the left-hand side are people within the Pande Group; Joseph and Peter are also programmers like me. Diwakar and TJ helped set up many of our projects. Christian and Robert have always been there for support and feedback. But wait, one last thing. This week, I'll be doing a Q&A session on reddit at reddit.com/r/folding. So if you've got questions, come drop by and hang out with us. Thanks, and bye-bye. View the full article
  14. <p><em><strong>Here's a guest post from <a href="http://iet.open.ac.uk/people/vickie.curtis" target="_self">Vickie Curtis</a>, a Research Student at the UK's Centre for Research in Education and Educational Technology.</strong></em></p> <p>I am a doctoral student at the Institute for Educational Technology at the Open University in the UK. I am looking at how digital technologies are changing the way scientists interact with members of the wider public, and I am particularly interested in online 'citizen science' projects such as Folding@home. </p> <p>A few weeks ago we launched an online survey to learn a little more about why people contribute to the Folding@home community, their views about the project, and about 'citizen science' projects in general. We've had a great response so far, but would like to keep the survey open for a couple more weeks so that we can capture the views of participants who haven't yet had a chance to take part (we would love to hear from more women who contribute to Folding@home).</p> <p>The survey should take about 10 minutes, and the feedback will eventually be shared with you via the website and blog. All the information you supply will be kept on a secure server and not passed to any third parties. If you would like to take part, please follow the link below.</p> <p> Many thanks to those who have already contributed!</p> <p><a href="http://www.survey.bris.ac.uk/open/foldingathome" target="_self">http://www.survey.bris.ac.uk/open/foldingathome</a></p> View the full article
  15. <p><strong><em>Today, we have a guest blog post by Vickie Curtis, a research student in the UK's Centre for Research in Education and Educational Technology. She's working with the Folding@home team to glean more feedback from donors.</em></strong></p> <p><strong>Would you like to learn more about the Folding@home community and your contribution to it? </strong>I am a doctoral student at the Institute for Educational Technology at the Open University in the UK. I am looking at how digital technologies are changing the way scientists interact with members of the wider public, and I am particularly interested in online 'citizen science' projects such as Folding@home. </p> <p>Folding@home is one of the longest-running and most successful online citizen science projects, and it would be great to know a little more about why people contribute to the Folding@home community, their views about the project, and about these types of projects in general. I have prepared an online survey for participants, which should take about 10-15 minutes to complete. The feedback will be shared with the Folding@home team and may help them to make improvements to the project. I will also share the findings with you via the website and blog. </p> <p>All the information you supply will be kept on a secure server and not passed to any third parties. If you would like to take part, please follow the link below.</p> <p><a href="http://www.survey.bris.ac.uk/open/foldingathome">http://www.survey.bris.ac.uk/open/foldingathome</a></p> View the full article
  16. <p>We have been aggressively working on OpenMM (the key code used in the FAH GPU cores), creating new algorithms to increase performance on NVIDIA and AMD GPUs. The results have been pretty exciting. With OpenMM 5.1 (vs OpenMM 5.0, used in the current core 17 release), we are getting about a 2x speed up on typical FAH WU calculations, which will lead to an automatic 2x increase in PPD once this software is out of beta testing and integrated into core 17. </p> <p>There's a lot of testing to do and it's very possible that these numbers will change, but the results were so exciting that I wanted to give donors a heads up. Here's some numbers that we're seeing:</p> <p style="text-align: left;"><strong>OpenCL running on the GTX 680: </strong>The first 2 columns are nanoseconds per day (i.e. how much science gets done in a GPU day) and the 3rd column is the speedup of 5.1 over 5.0.</p> <table cellpadding="0" cellspacing="0" style="margin-left: auto; margin-right: auto;"> <tbody> <tr> <td valign="middle"> <p><strong>Type of Calculation</strong></p> </td> <td valign="middle"> <p><strong><span style="font-size: 10pt;">OpenMM 5.0</span></strong></p> </td> <td valign="middle"> <p><strong><span style="font-size: 10pt;">OpenMM 5.1</span></strong></p> </td> <td valign="middle"> <p><strong><span style="font-size: 10pt;">Speedup</span></strong></p> </td> </tr> <tr> <td valign="middle"> <p><span style="font-size: 10pt;">Implicit hbonds</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">92</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">134</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">1.46</span></p> </td> </tr> <tr> <td valign="middle"> <p><span style="font-size: 10pt;">Implicit hangles</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">153</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">209</span></p> </td> <td valign="middle"> <p><span style="font-size: 
10pt;">1.36</span></p> </td> </tr> <tr> <td valign="middle"> <p><span style="font-size: 10pt;">RF hbonds</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">31.4</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">78.1</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">2.49</span></p> </td> </tr> <tr> <td valign="middle"> <p><span style="font-size: 10pt;">RF hangles</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">58</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">113.0</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">1.95</span></p> </td> </tr> <tr> <td valign="middle"> <p><strong><span style="font-size: 10pt;">PME hbonds</span></strong></p> </td> <td valign="middle"> <p><strong><span style="font-size: 10pt;">19.6</span></strong></p> </td> <td valign="middle"> <p><strong><span style="font-size: 10pt;">41.5</span></strong></p> </td> <td valign="middle"> <p><strong><span style="font-size: 10pt;">2.12</span></strong></p> </td> </tr> <tr> <td valign="middle"> <p><span style="font-size: 10pt;">PME hangles</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">37.3</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">66.9</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">1.79</span></p> </td> </tr> </tbody> </table> <p style="text-align: center;"> </p> <p><strong>OpenCL running on a Radeon HD 7970: </strong>The first 2 columns are nanoseconds per day (i.e. 
how much science gets done in a GPU day) and the 3rd column is the speedup of 5.1 over 5.0.</p> <table cellpadding="0" cellspacing="0" style="margin-left: auto; margin-right: auto;"> <tbody> <tr> <td valign="middle"> <p><strong>Type of Calculation</strong></p> </td> <td valign="middle"> <p><strong><span style="font-size: 10pt;">OpenMM 5.0</span></strong></p> </td> <td valign="middle"> <p><strong><span style="font-size: 10pt;">OpenMM 5.1</span></strong></p> </td> <td valign="middle"> <p><strong><span style="font-size: 10pt;">Speedup</span></strong></p> </td> </tr> <tr> <td valign="middle"> <p><span style="font-size: 10pt;">Implicit hbonds</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">87</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">120</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">1.38</span></p> </td> </tr> <tr> <td valign="middle"> <p><span style="font-size: 10pt;">Implicit hangles</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">96</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">104</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">1.09</span></p> </td> </tr> <tr> <td valign="middle"> <p><span style="font-size: 10pt;">RF hbonds</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">33.5</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">83.5</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">2.49</span></p> </td> </tr> <tr> <td valign="middle"> <p><span style="font-size: 10pt;">RF hangles</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">51.8</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">90.2</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">1.74</span></p> </td> </tr> <tr> <td valign="middle"> <p><strong><span style="font-size: 10pt;">PME hbonds</span></strong></p> </td> <td valign="middle"> 
<p><strong><span style="font-size: 10pt;">21.8</span></strong></p> </td> <td valign="middle"> <p><strong><span style="font-size: 10pt;">49.3</span></strong></p> </td> <td valign="middle"> <p><strong><span style="font-size: 10pt;">2.26</span></strong></p> </td> </tr> <tr> <td valign="middle"> <p><span style="font-size: 10pt;">PME hangles</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">34.6</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">63.0</span></p> </td> <td valign="middle"> <p><span style="font-size: 10pt;">1.82</span></p> </td> </tr> </tbody> </table> <p> </p> <p>Note that "PME hbonds" is likely the most common calculation that we plan to run in the near term with core 17. We're very excited about the way this is shaping up and think that donors would be curious to know where this is going. </p> View the full article
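For reference, the "Speedup" column in the tables above is just the ratio of the two ns/day columns. Recomputing it for the GTX 680 table (the values agree with the table to rounding; the table was presumably built from unrounded measurements):

```python
# Recompute the GTX 680 "Speedup" column from the ns/day figures above.
# Map: calculation type -> (OpenMM 5.0 ns/day, OpenMM 5.1 ns/day)
gtx680_ns_per_day = {
    "Implicit hbonds": (92, 134),
    "Implicit hangles": (153, 209),
    "RF hbonds": (31.4, 78.1),
    "RF hangles": (58, 113.0),
    "PME hbonds": (19.6, 41.5),
    "PME hangles": (37.3, 66.9),
}

for calc, (v50, v51) in gtx680_ns_per_day.items():
    print(f"{calc}: {v51 / v50:.2f}x")  # e.g. RF hbonds: 2.49x
```

The same ratio applied to the "PME hbonds" row is where the headline "about a 2x speed up on typical FAH WU calculations" comes from.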
  17. <p>Guest post from Dr. Greg Bowman, UC Berkeley</p> <p>Prof. Vince Voelz's lab has published an exciting paper on their recent successes with predicting the structures of protein-like molecules called peptoids (<a href="http://www.pnas.org/content/early/2012/08/13/1209945109.abstract">here</a>). Peptoids are similar to proteins but with a rearrangement in their chemistry (see example below). Their similarity to proteins allows peptoids to function like proteins. However, the alteration in peptoid chemistry relative to proteins effectively makes them invisible to the parts of the immune system designed to recognize foreign proteins. Therefore, peptoids are an attractive option for drug design. To fully realize this potential, we need to be able to predict the structures of peptoids and design them to perform specific functions. The Voelz lab's work demonstrates that computer simulations can provide this sort of information by presenting predicted structures of a number of peptoids along with experimental structures confirming the accuracy of their predictions (see example below).</p> <p> </p> <p><img alt="" height="114" src="http://folding.typepad.com/.a/6a00e54ef157d78834017d420dae0b970c-pi" width="342" /></p> <p>Peptide vs. peptoid chemistry. In peptoids, a group of atoms (called R) is moved from a carbon to an adjacent nitrogen (N).</p> <p> </p> <p><img alt="" height="190" src="http://folding.typepad.com/.a/6a00e54ef157d78834017d420dae16970c-pi" width="222" /></p> <p>An example of one of the Voelz lab's predicted structures (in green) overlaid with the experimental structure (in white). </p> <p> </p> View the full article
  18. <p><a href="http://proteneer.com/blog/?p=1819" target="_self">We've released FAHBench 1.0</a>, with a slick new GUI that should make it much more accessible to newcomers. Click on the FAHBench link above or the image below to try it out! Don't worry, it maintains backwards compatibility with the old command-line interface.</p> <p>More info at <a href="http://fahbench.com/" target="_self">http://fahbench.com/</a></p> <p><a href="http://www.fahbench.com/" rel="attachment wp-att-1810" target="_self"><img alt="FAHBENCH" height="358" src="http://proteneer.com/blog/wp-content/uploads/2013/01/FAHBENCH.png" width="516" /></a></p> View the full article
  19. <p>The Quick Return Bonus (QRB) gives more points when WUs are completed quickly. This helps keep the points in line with the science. Now, with the GPU core maturing, our plan is to treat all WUs identically, i.e. benchmark on a single benchmark machine (SMP) and use those points. Now that we can do just about any calculation on any piece of hardware, it's strange to benchmark them separately. That wasn't the case before, when the capabilities of the GPU and SMP cores were very different. </p> <p>With the new GPU core (17), we'll have that matching capability. Our plan is to introduce the QRB to GPUs with the rollout of production core 17 WUs.</p> View the full article
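The post doesn't spell out the bonus formula, but the QRB has generally been described as multiplying a WU's base points by a factor that grows with the square root of how far ahead of the deadline the WU comes back. A minimal sketch of that idea follows; the function name, the k value, and all the numbers here are illustrative assumptions, not official project parameters:

```python
import math

def qrb_points(base_points, k, deadline_days, elapsed_days):
    """Sketch of a Quick Return Bonus: faster returns earn a larger multiplier.

    ASSUMPTION: the sqrt form and the per-project constant k mirror how the
    QRB has been described publicly; real values are set server-side.
    """
    bonus = math.sqrt(k * deadline_days / elapsed_days)
    # A WU returned near the deadline should never score below base points.
    return base_points * max(1.0, bonus)

# Returning the same hypothetical WU twice as fast improves the
# multiplier by a factor of sqrt(2), i.e. about 1.41x more points.
fast = qrb_points(1000, k=2.0, deadline_days=6, elapsed_days=1)
slow = qrb_points(1000, k=2.0, deadline_days=6, elapsed_days=2)
```

The square-root shape is the key design point: it rewards quick returns (which matter for serial simulation science) without letting the bonus dominate the base points entirely.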
  20. <p>We often have to make difficult decisions about what hardware to support in the future, including adding new platforms or removing existing ones. Removing existing platforms always leads to a lot of disruptive change for donors, so we try to do this as rarely as we can. In particular, in the GPU1 to GPU2 transition, there was a big change done quickly, which was extremely hard on donors.</p> <p>For GPUs in particular, the central issue is that, in general, GPU technology keeps on progressing, and GPU manufacturers come up with new ways to do things which make the old ones obsolete. So, it's probably safe to say that as long as hardware design keeps innovating, *eventually* older GPUs will become obsolete for FAH. We try to keep as many GPUs working as long as possible, but eventually it becomes a losing battle: we have only a fixed number of programmers, and more GPU types (even from a given vendor, say different CUDA capability levels) require more programmers to keep up, and eventually we run out of resources.<br /><br />Right now, there's a division at the Fermi cards. Fermi and later cards have powerful new capabilities that the older cards do not have. So, I can imagine eventually we'll run out of tricks to support the older cards. I can't predict when that will be, as it depends on lots of things, but I can say we're trying hard to support everything as long as we can.</p> <p>One of the biggest issues is that scientific needs can change based on where the science takes us, and that's particularly hard for us to predict. We'll try to let donors know as soon as we can if there will be any changes, but some donors' questions prompted this blog post so that donors at least have some sense of how our decisions are made internally.</p> View the full article
  21. <p>Guest post from Dr. Gregory Bowman, University of California, Berkeley</p> <p>One objective of the Folding@home project is to provide insight into general protein folding mechanisms that hold across a variety of systems. For example, how much longer, on average, do longer proteins take to fold than shorter ones? To address questions like this, Lane and Pande recently developed a theoretical framework for testing different models of the scaling of protein folding times with chain length against experimental observations (link <a href="http://arxiv.org/abs/1301.4302">here</a>). One of their major conclusions is that there is insufficient data to distinguish between many of the possible models. All is not lost, though. The framework they developed also allows them to point out the types of measurements that need to be made to conclusively determine which of the proposed models is best. In the future, it will be exciting to begin both computational and experimental studies of these systems. </p> View the full article
  22. <p>Today, we have a guest post by Team MaximumPC. Two new work unit servers have been funded by the Folding@home (FAH) community in memory of Gordon T. Smitheman and his wife, Rose Anne.</p> <p><a class="asset-img-link" href="http://folding.typepad.com/.a/6a00e54ef157d78834017ee90030f1970d-pi" style="float: left;"><img alt="GordonSmitheman" border="0" class="asset asset-image at-xid-6a00e54ef157d78834017ee90030f1970d" src="http://folding.typepad.com/.a/6a00e54ef157d78834017ee90030f1970d-800wi" style="margin: 0px 5px 5px 0px;" title="GordonSmitheman" /></a>Gordon, a dedicated member (username: <a href="http://fah-web2.stanford.edu/cgi-bin/main.py?qtype=userpage&username=gsmitheman" target="_self">gsmitheman</a>) of Team MaximumPC (#11108), passed away January 25, 2012. His death came just a couple of months after his beloved Rose passed away on November 27, 2011 after a battle with pancreatic cancer.</p> <p>As evidenced by his 17,000 posts in the MaximumPC Folding Forum, Gordon was very active in promoting the project. It wasn't just the quantity, but the quality of his posts that made him a revered teammate. His calling card was recognizing others: welcoming new folders, celebrating personal and team milestones, coordinating contests with prizes of hardware donated by him and others, and other posts that encouraged active participation in the forum. He also visited other teams' forums to extend greetings and recognize their milestones.</p> <p> <a class="asset-img-link" href="http://folding.typepad.com/.a/6a00e54ef157d78834017d418c591f970c-pi" style="float: right;"><img alt="RoseSmitheman" border="0" class="asset asset-image at-xid-6a00e54ef157d78834017d418c591f970c" src="http://folding.typepad.com/.a/6a00e54ef157d78834017d418c591f970c-800wi" style="margin: 0px 0px 5px 5px;" title="RoseSmitheman" /></a>The impact of losing such a valuable teammate and genuinely kind-hearted person was hard to put into words, but many did their best in a forum thread titled "<a href="http://www.maximumpc.com/forums/viewtopic.php?f=32&t=133617" target="_self">We Lost One of F@H's Greatest Champions - Gordon Smitheman</a>."</p> <p>Teammate and FAH beta tester Michael "Doc" McCord, MD proposed a server purchase as a tribute to the tireless dedication Gordon demonstrated in promoting folding. The generosity of donors, representing numerous teams throughout the world, resulted in the funds to purchase not one, but two servers for the project.</p> <p>Gordon greatly valued his family. It was one of the few things that took precedence over folding. But his family was very supportive of his FAH activities. It was for that reason that naming the second server after Gordon's wife, Rose, was a natural choice.</p> <p>Many thanks to Dr. Pande, Dr. McCord, Team MaximumPC, and everyone who donated the funds that made this commemoration possible. Fold On!</p> View the full article
  23. <p>As <a href="http://proteneer.com/blog/?page_id=1671" target="_self">previously blogged</a>, FAHBench is the official <a href="http://folding.stanford.edu/">Folding@Home</a> GPU benchmark. It measures the compute performance of GPUs for Folding@Home. In addition, through a loadable DLL system, it gives vendors and skilled hackers a way to make customized plugins and test their results.</p> <p><strong>Info</strong><br />Current release 0.5, Jan/30/2013 (Stable)<br />Updates:<br />-Minor performance improvements<br /><a href="http://proteneer.com/fahbench/FAHBench_0_5.zip">Download Link</a> (you must agree to the Disclaimer at the bottom)</p> <p><strong>Usage</strong><br />The program runs from the command line, allowing you to set up runs as needed. For example:<br /><code>FAHBench.exe -deviceId 0 -platform OpenCL -precision single</code><br />uses the OpenCL platform on the 0th (0-indexed) device, in single precision mode. NVIDIA cards can use either the OpenCL or the CUDA platform. ATI cards must use the OpenCL platform.</p> <p><strong>Contact</strong><br />Yutong Zhao (proteneer at gmail)</p> <p><strong>Requirements</strong><br />OpenCL:<br />Windows (tested on Win7, untested on others)<br />Latest NVIDIA/ATI drivers (exclusive-or)<br />CUDA:<br />Since Microsoft has deprecated MSVS 2008 Express, the CUDA platform is not supported until NVIDIA ships a built-in JIT compiler.</p> <p><strong>Current Stats</strong><br />These numbers measure the expected number of nanoseconds per day on a realistic simulation system.
The time step used in integration is set to 2 femtoseconds.</p> <pre><span style="font-family: 'courier new', courier; font-size: 10pt;"><strong>System 1: DHFR, Single Precision</strong>
OpenCL HD7970: (Explicit 18.1 ns/day | Implicit 101.3 ns/day)
Tesla K20:     (Explicit 18.1 ns/day | Implicit  84.5 ns/day)
GTX 660Ti:     (Explicit 16.1 ns/day | Implicit  77.0 ns/day)
HD4000:        (Explicit  3.2 ns/day | Implicit  18.0 ns/day)
i7-3770K:      (Explicit  3.1 ns/day | Implicit   3.4 ns/day)
(more to come)</span></pre> <p> </p> <p><strong>Developers</strong><br />It is very easy to test your own modifications to the source code. Note, though, that the GPU code in OpenMM is LGPL: you must be able to provide us (the OpenMM developers) with your changes to the source code upon request.</p> <p>1. Download the latest OpenMM from the svn repo: <code>svn checkout https://simtk.org/svn/openmm </code><br />2. Build the OpenMM, OpenMMSerialization, OpenMMCUDA, and OpenMMOpenCL projects using CMake and <em>VS2008</em><br />3. Replace the .dlls in the FAHBench folder with your own</p> <p>FAHBench is not the recommended way to verify accuracy: OpenMM has its own suite of detailed unit tests that can pinpoint problems much more easily. FAHBench serves as a final check.</p> <p><strong>Links</strong><br /><a href="http://folding.stanford.edu/">Folding@Home</a><br /><a href="http://openmm.org/">OpenMM</a></p> <p><strong>Disclaimer</strong><br /><code>IN NO EVENT SHALL THE AUTHORS BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF STANFORD UNIVERSITY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE AUTHORS SPECIFICALLY DISCLAIM ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE AND ACCOMPANYING DOCUMENTATION PROVIDED HEREUNDER IS PROVIDED "AS IS".
THE AUTHORS AND STANFORD UNIVERSITY HAVE NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.</code></p> View the full article
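The ns/day figures in the stats table follow directly from raw integrator throughput and the 2 fs time step quoted above. As a hedged illustration of that conversion (the function and its names are ours, not part of FAHBench):

```python
# Convert raw MD throughput (integration steps per second) into the
# nanoseconds-per-day figure FAHBench reports. Assumes the 2 fs time
# step mentioned above; the helper is illustrative, not FAHBench API.

FEMTOSECONDS_PER_NS = 1_000_000
SECONDS_PER_DAY = 86_400

def ns_per_day(steps_per_second, timestep_fs=2.0):
    """Simulated nanoseconds per wall-clock day of folding."""
    fs_per_day = steps_per_second * timestep_fs * SECONDS_PER_DAY
    return fs_per_day / FEMTOSECONDS_PER_NS

# Example: ~104.7 steps/s at a 2 fs time step corresponds to roughly
# the 18.1 ns/day explicit-solvent DHFR figure quoted for the HD7970.
print(round(ns_per_day(104.7), 1))  # → 18.1
```

This also makes clear why implicit-solvent numbers are so much higher: fewer particles per step means more steps per second, and ns/day scales linearly with step rate at a fixed time step.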
  24. <p>As also announced on OpenMM/Folding@home programmer <a href="http://proteneer.com/blog/?p=1767" target="_self">Yutong "proteneer" Zhao's web site</a>, we are happy to announce that Folding@Home Core 17 has entered beta. Externally, you probably won't notice much of a difference. Internally, this is a complete overhaul that brings many new features and sets a strong foundation for the future of GPU core development. In addition, the restructuring integrates the core much more tightly with the rest of Folding@Home development.</p> <p>We're also introducing an explicit solvent project (p7661) as part of the beta. To keep the credit assigned to these projects consistent with previous explicit solvent work units run on CPUs, we are also awarding a quick return bonus with a k-factor of 0.75. This reflects the additional scientific value of these units and keeps Folding@Home credit awards consistent across different architectures.</p> <p><strong>Usage:</strong><br />This is still a very new core, and many of its features have yet to be fully tested. Thus, as is the beta policy, no official support is given. You must enable the -beta flag on FAHClient, i.e. set client-type=beta. If you're using client 7.2.x or earlier, there are two options:<br />1. Specify -gpu-vendor=XXX, where XXX is either nvidia or ati<br />2.
If -gpu-vendor is not specified, the core will automatically guess the platformId.<br />Otherwise, 7.3.6 lets you specify the particular -gpu-vendor as an option.<br />Supported NVIDIA cards: Fermi or better (Titan does not work at the moment, as NVIDIA needs to publish new OpenCL drivers)<br />Supported ATI cards: HD5000 or better<br />As always, please use the latest drivers (Win XP is NOT supported due to very old AMD drivers).</p> <p><strong>Key Features:</strong><br /><em>Cleaner Code</em><br />We have deprecated several layers of GROMACS and other wrappers, as the old architecture severely limited the types of simulations that could be run. Much of the work on the new core has gone into replacing existing features. The resulting code is now more streamlined and integrated. We also anticipate that this major redesign will allow us to introduce new features into Folding@Home much faster.</p> <p><em>Serialization</em><br />We have introduced a new serialization mechanism that allows Pande Group researchers to set up significantly more diverse simulations. Pande Group researchers can now easily set up jobs and projects using Python (with a much richer and easier-to-use set of libraries), while the core maintains its speed by virtue of being written in C++. We achieved this using a serialization technique whereby all details of a simulation are encapsulated in a standardized format that can then be passed across language boundaries. This also drastically reduces the dependencies needed by the Work Server and other parts of Folding@Home.</p> <p><em>A single unified core now runs both NVIDIA and AMD cards</em><br />Before, we had two development branches for NVIDIA and AMD cards, which were difficult and cumbersome to debug and maintain. We couldn't easily mix runs and gens produced by different GPU types.
Now, using OpenCL, a single core supports not only AMD and NVIDIA but, in principle, any OpenCL-capable device.</p> <p><em>Improved Stability</em><br />By reducing the amount of boilerplate code, we've also increased the robustness and stability of the core. The log files should now be much more informative. There are also a lot of useful debugging features built right into the core to help PG developers nail down hard-to-find bugs.</p> <p><strong>Extra special thanks to our testers:</strong><br />Jesse_V, k1wi, artoar_11, bollix, ChelseaOilman, bruce, Demonfang, Grandpa_01, EXT64, Flaschie, HayesK, jimerickson, mmonnin_, P5-133XL, Patriot, rhavern, sc0tty8, SodaAnt, SolidSteel144, EvilPenguin, art_l_j_PlanetAMD64, thews, cam51037, Pin, muziqaz, baz657, PantherX, QINSP, Schro, and hootis.</p> View the full article
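The serialization workflow described above, where a Python setup script writes a standardized, language-neutral description of a simulation that the C++ core later parses, can be sketched roughly as follows. This is a minimal illustration using XML; the actual OpenMM/Core 17 wire format and field names are not public here, and all names below are hypothetical:

```python
# Sketch of the setup-side serialization idea: encapsulate simulation
# details in a standardized text format (XML here) that a core written
# in another language can parse. Field names are illustrative only and
# do not reflect the real Core 17 format.
import xml.etree.ElementTree as ET

def serialize_simulation(params):
    """Encode a dict of simulation details as an XML string."""
    root = ET.Element("simulation")
    for key, value in params.items():
        child = ET.SubElement(root, key)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

def deserialize_simulation(xml_text):
    """What a consumer in any language would do: parse the shared format."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}

# A researcher's Python-side setup...
blob = serialize_simulation({"timestep_fs": 2, "solvent": "explicit"})
# ...and the core-side reconstruction (here in Python for brevity;
# in Core 17 the consumer is the C++ core).
restored = deserialize_simulation(blob)
```

The benefit described in the post follows from this shape: the Work Server and setup tooling only need to understand the standardized format, not the core's internal C++ data structures, which is what cuts the dependency footprint.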
  25. <p>OpenMM is a key part of Folding@home, powering its GPU cores. You can learn more about OpenMM on its YouTube channel, which includes technical videos on how you can incorporate OpenMM into your code. It also includes an introduction to Markov State Models (MSMs), a key technology used in Folding@home.</p> <p><a href="http://www.youtube.com/user/SimbiosOpenMM" target="_self">http://www.youtube.com/user/SimbiosOpenMM</a></p> <p><iframe frameborder="0" height="344" src="http://www.youtube.com/embed/yLnjbjtRgdg?feature=oembed" width="459"></iframe> </p> <p><iframe frameborder="0" height="344" src="http://www.youtube.com/embed/0pB3pUXULmo?feature=oembed" width="459"></iframe> </p> View the full article