
INDUSTRY NEEDS
Manufacturing is a complex and increasingly costly undertaking, and reducing that complexity and cost is a central goal of the National Center for Manufacturing Sciences. Hundreds of thousands of potential factors can affect a manufactured product, including the introduction of new materials, environmental concerns, and the need for timely product commercialization. It is critical that these factors be evaluated during the design phase of product development, prior to full-scale production. At this stage, costly flaws can be identified and corrected, and innovative product configurations can be uncovered.
VISION
NCMS is developing services and collaborating on cross-industry projects that will revolutionize product design for manufacturers. The High Performance Simulation for Product Design (HPSPD) program will enable manufacturers of all sizes to use supercomputer processing power to evaluate their product designs during the development stage. NCMS will bring together technology developers, providers, and end users to create a set of tools that are affordable and timely and that speed the commercialization of manufactured products to the marketplace. We believe that this product development methodology will become an industry standard over the next decade. Early adopters of HPSPD will have a distinct competitive advantage over their global competitors.
CAPABILITY
NCMS has identified Decision Incite Inc. as a leading innovator in this emerging technology. The Decision Incite team has developed a process called Simulation Supported Decision Making (SSDM), which simulates millions of variable combinations to pinpoint the five to twenty most influential characteristics of a new design. Identifying these hypercritical characteristics helps companies focus on the design aspects that will have the most impact on the final product. This benefit alone would make SSDM an invaluable tool for manufacturers.
The SSDM process also helps identify “outliers”: potential configurations that are not normally visible to traditional, experience-based product design methods. These outliers can expose potential problems as well as innovative new configurations that greatly increase profitability and utility for the end user.

The SSDM process requires access to high performance computing power, which in the past was not available to most manufacturers. SSDM takes advantage of the relatively new availability of surplus high performance computing capacity to make this design capability affordable. The SSDM tool is linked to supercomputing centers to quickly generate Insight Maps, which are used to focus work on the areas of highest importance and impact and to understand the cause and effect of changes made during the product design phase. The obvious benefit is that identifying potential issues early reduces costs, since design improvements can be made prior to actual production.

Traditionally, design decisions have been based on educated guesses grounded in experience. That approach is no longer suitable for manufacturing complex goods that incorporate new materials, sensors, electronics, and other innovations. When flaws in the relationships among these elements are discovered late, expensive production changes and product liability issues follow.

The SSDM process begins by applying a simulation-generated Insight Map to the results of a Monte Carlo Simulation (MCS). The data is generated by running multiple physics-based analyses of a parameterized computer model, varying the parameters across their natural ranges with each run. The process models reality accurately because it incorporates variability and uncertainty. The result is a cloud of points, each point an accurate result for one specific combination of variable values.
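The sketch below gives a minimal picture, in Python, of what one such Monte Carlo loop can look like. The cantilever-beam model, parameter ranges, and run count are illustrative assumptions for this article, not Decision Incite's SSDM tooling:

    # Minimal Monte Carlo sketch (illustrative only; not Decision Incite's SSDM code).
    # Each run draws every parameter from its natural range and evaluates a
    # physics-based model, producing one point in the result "cloud".
    import random

    def cantilever_tip_deflection(load_N, length_m, E_Pa, I_m4):
        # Simple physics-based model: tip deflection of an end-loaded cantilever beam.
        return load_N * length_m**3 / (3.0 * E_Pa * I_m4)

    # Hypothetical parameter ranges (nominal value, +/- fractional variation).
    parameter_ranges = {
        "load_N":   (1000.0, 0.15),   # applied load varies +/-15%
        "length_m": (2.0,    0.02),   # beam length varies +/-2%
        "E_Pa":     (200e9,  0.05),   # Young's modulus varies +/-5%
        "I_m4":     (8.0e-6, 0.10),   # second moment of area varies +/-10%
    }

    def draw_sample():
        # One combination of inputs, each drawn across its natural range.
        return {name: random.uniform(nom * (1 - var), nom * (1 + var))
                for name, (nom, var) in parameter_ranges.items()}

    # Run the model many times; each result is an accurate answer for that combination.
    cloud = []
    for _ in range(10_000):
        inputs = draw_sample()
        cloud.append((inputs, cantilever_tip_deflection(**inputs)))

    deflections = [result for _, result in cloud]
    print(f"{len(cloud)} runs, deflection range "
          f"{min(deflections):.2e} m to {max(deflections):.2e} m")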

MCS is easy to use, with no special algorithms or methods required, and unlike many other analysis methods it is independent of the number of variables in the problem. That independence leads to a fundamental change in engineering practice, which historically began by making assumptions to simplify a problem enough to solve it. Instead of making assumptions to limit the number of variables, this process promotes including as many variables as possible, enabling engineers to use Insight Maps as tools for learning from simulation, revealing unanticipated events and reducing product risk.
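As a rough illustration of letting the analysis reveal what matters, the Python sketch below builds a result cloud from a made-up five-variable model, ranks the inputs by how strongly they correlate with the output, and flags runs that fall far outside the bulk of the cloud. The variable names and model are hypothetical, and an Insight Map presents this kind of information in far richer form:

    # Illustrative only: screen a Monte Carlo result cloud for the inputs that most
    # influence the output, and flag outlier runs. Variable names and model are made up.
    import random
    import statistics

    def hypothetical_model(x):
        # Stand-in model: strongly driven by two of the five inputs, plus a little noise.
        return (4.0 * x["wall_thickness"] - 2.5 * x["load"] + 0.1 * x["temperature"]
                + 0.01 * x["humidity"] + 0.01 * x["coating"] + random.gauss(0, 0.05))

    variables = ["wall_thickness", "load", "temperature", "humidity", "coating"]

    # Build the cloud: many runs, each with every input varied across its range.
    cloud = []
    for _ in range(5_000):
        inputs = {name: random.uniform(0.0, 1.0) for name in variables}
        cloud.append((inputs, hypothetical_model(inputs)))

    def pearson(xs, ys):
        # Plain Pearson correlation; no external libraries required.
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

    outputs = [out for _, out in cloud]

    # Rank every input by the strength of its relationship with the output.
    ranking = sorted(((abs(pearson([inp[v] for inp, _ in cloud], outputs)), v)
                      for v in variables), reverse=True)
    print("Inputs ranked by influence on the result:")
    for score, name in ranking:
        print(f"  {name:15s} |correlation| = {score:.2f}")

    # Flag outlier runs: results far outside the bulk of the cloud.
    mean_out, std_out = statistics.fmean(outputs), statistics.stdev(outputs)
    outliers = [run for run in cloud if abs(run[1] - mean_out) > 3 * std_out]
    print(f"Outlier runs (beyond 3 standard deviations): {len(outliers)}")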
CONCLUSION
Evaluating an innovative product in this level of detail during the design phase is a natural evolution in lean manufacturing, and it is vital that manufacturers prepare to integrate this capability into their design processes. As the manufacturing space becomes more competitive, companies should look to this capability as a means to reduce risk and speed commercialization of their products. NCMS will be on the ground floor of this dynamic change in the engineering process. Learn more about this project from NCMS (http://www.ncms.org) and Decision Incite Inc. (http://www.decisionincite.com) at the NAFEMS North American regional summit, October 29-31, 2008, in Hampton, VA. Visit http://www.nafems.org for more information.
ABOUT NCMS
The National Center for Manufacturing Sciences (NCMS) was founded in 1986 and is the largest cross-industry collaborative R&D consortium in North America. NCMS collaborative teams have an extensive track record of creating and commercializing innovative technologies.

I suggest that NCMS raise its sights from High Performance Computing to High Performance Information and Choice Making.
High Performance Information and Choice Making focuses on the benefit for NCMS clients, rather than on a presumption of appropriate technology.
Simulation reveals the anticipated behavior of a system model. Anticipated behavior guides systems design choice-making which becomes ever more important as the system gets bigger in extent, variety and ambiguity. However, the need for High Performance (meaning high cost relative to other platforms) stems from the notion that computing is the better platform for anticipating system behavior. NCMS should focus on the Ends — Adequate, Accurate and Timely foresight about requisite system behavior — not the Means — High Performance Computing.
In fairness other kinds of platforms have not been available. However, with the advent of new processing architectures, typified by Patent Reg. # 7392229.B.2, it is now reasonable to think of a $100 chip doing the work of 3,400 microprocessors — in microseconds.
If you want to look at the question of alternatives to high performance (high cost) computers, a good overview may be seen in "The Von Neumann Syndrome," R. Hartenstein, downloaded January 14, 2008 from
http://www.fpl.uni-kl.de/staff/hartenstein/Hartenstein-Delft-Sep2007.pdf
If you want to know more about the significance of the General Purpose Set Theoretic Processor described in Patent Reg. # 7392229.B.2, I will be happy to share a white paper regarding a Systems Viability and Verification Capability. Please advise.
Hope this helps move things along,
Jack Ring
Co-founder, Kennen Technologies LLC
Fellow, International Council on Systems Engineering
SSDM is something that has been around for a while. Running mathematical models to find outliers is really not a new concept. The math is very complex, though, which is the reason most of us have had to make lots of assumptions to keep the equations manageable to solve. It takes a lot of computing power to make this work. In the past, only large corporations had the budgets to buy the equipment to run more complex models. This is why you typically saw this type of analysis run only by large automotive, aerospace, and defense companies. I think the time has finally come when you can actually run these models without making hundreds of assumptions. The cost of computing power is really cheap now, and the number of calculations per second has dramatically increased in the last decade.
I think that this type of software will now put the SSDM analysis in the hands of the middle tier manufacturers. This is a good thing since it should improve products and bring about more competition in areas that typically have been the domain of larger companies. It also should bode well for the consumer who should see the effects of this type of product quality improvement in cheaper and better products.
This type of product can also benefit in other areas where it is not used as often if at all like building materials, replacement parts, safety equipment, renewable energy equipment, and healthcare.
The smaller manufacturers may have to ramp up for this type of software by dedicating some engineering resources to figure out the variables, assumptions, etc. to create a realistic model. This is the hard part. You still have to know your product and how to successfully model all the constraints.
However, this is probably nothing more than hiring a couple of PhDs or consultants to help these companies get rolling. The end result should be interesting.
Through my work developing and marketing new consumer packaged goods (CPG), I recognize the importance of finding issues early. Increasing pressure to decrease lead times and development costs allows less time to comb through every possible option. I can see how this sort of solution could support a hypothesis-based development approach by helping developers and decision-makers identify and focus on the most critical development areas.
Additionally, I could see business applications beyond product development in general business development. My team has recently gone through an exercise in which we selected key target geographies based on a variety of criteria including sales, growth, market size, margin, etc. We were limited by the number of variables we could handle. This solution would seem to eliminate that constraint.
While I see the potential of this sort of solution, I have trouble seeing how WELL it might translate to a business solution and/or CPG product development. Developing a complex model may prove to be more effort than the solution is ultimately worth, given the short cycle time and lower overall price (i.e. lower risk) in this market. A supercomputer may be more power than I need.
Any thoughts on simpler applications of this solution? Could you provide an example of this tool (or one like it) being used (e.g. sample variables, sample inputs, etc.) so I could better translate this into my potential needs?
Thanks,
Ward Elwood
Sr. Brand Manager, Developing & Emerging Markets
Kimberly-Clark
As a Simulation Engineering Instructor, I can attest to the fact that Physics-Based Simulation has already proven to be a valuable tool in manufacturing and other industries, saving literally hundreds of millions of dollars by catching mistakes in the design phase.
Automotive Manufacturing now does nearly all Robotic programming using Off-Line Simulation, NIST uses Simulation, Russian Scientists are using it to clean up Chernobyl, the Navy uses it to help design atomic submarines, and there are several Sim Software products that will run on laptops with good results. So extremely expensive mega-computers are not needed in most cases.
The major caveat involves GIGO.
Well-Trained and Experienced operators are required to run the software, and they need real-world experience in the things they are trying to simulate.
Freshly-graduated Computer Science majors who have never worked in the environments simulated, simply will not do.
DARPA found this to be true, and tried to get operators trained in Community Colleges using a project called "Conduit" in the late '90s. I was the instructor on this project, but it failed due to poor management.
You don't need PhDs. You need Shop Rats with Associates Degrees!
Here is a link to a recent Wired article that gives some background on the theory behind what NCMS is attempting.
[quote]…The Petabyte Age is different because more is different. Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud. As we moved along that progression, we went from the folder analogy to the file cabinet analogy to the library analogy to — well, at petabytes we ran out of organizational analogies.
At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later. For instance, Google conquered the advertising world with nothing more than applied mathematics. It didn’t pretend to know anything about the culture and conventions of advertising — it just assumed that better data, with better analytical tools, would win the day. And Google was right…[/quote]
http://www.wired.com/science/discoveries/magazine/16-07/pb_theory
In my last post, I may not have explained myself sufficiently.
I SHOULD have said: "What you need is Shop Rats with an Associates Degree IN SIMULATION ENGINEERING"
I do not want to disparage Computer Science Majors, because they are talented and learned individuals. But what I typically saw is that those were the types of graduates they were hiring to learn Simulation Engineering, and that was TOTALLY wrong!
What Simulation Engineers need the MOST is HANDS-ON knowledge of THE PROCESSES THEY ARE TRYING TO SIMULATE, NOT how computers work.
THAT’S why Shop Rats are better suited to be Simulation Engineers than
Computer Science Majors.
Computer Science Majors usually attempt things that are not practical in the real world, such as holding a car hood with a suction-cup end-of-arm fixture and moving it at 2000 mm/sec, which works just fine in the computer world but fails miserably in the real world, where the AIR RESISTANCE blows the hood off the suction cups before the destination is reached.
A Shop Rat who has, for example, done Robot Programming on the Shop Floor, won’t make that mistake.
It is very encouraging to see the comments on this post. There are a couple of items that are important to clarify about the High Performance SIMULATION capability being established. The focus is on quickly getting reliable information by taking advantage of:
- COMMODITY COMPUTING, meaning that the cost per CPU-Hr is now measured in cents, from 10 cents to 60 cents per CPU-Hr. There is no need to purchase expensive hardware, as it is being offered on demand by companies ranging from Amazon to IBM.
- MINIMAL ASSUMPTIONS, as the Monte Carlo process used in Simulation Supported Decision Making is independent of the number of variables in the problem and is mathematically simple. A friend who is an MIT mathematician relayed that "the Monte Carlo process is NOT mathematically elegant (meaning complex), but it just gives you the right answers." This changes the problem definition from historically making simplifying assumptions in order to solve the problem, to incorporating as many variables as possible (minimizing assumptions) and letting the computer analysis sort out what is important (rather than assuming).
The key point is that engineering analysis has historically been deterministic, in which each variable has one value. We are now promoting stochastic simulation, in which each variable has a range of values (as really exists). Stochastic (or Monte Carlo) simulation extracts significantly more information from a model, such as identification of relationships between variables and of outliers (combinations of variables that generate non-intuitive results).
The barrier to using stochastic simulation in the past has been the computational cost and time needed to run hundreds or thousands of analysis runs. Advances in computers have enabled stochastic simulation with CPU costs of pennies per hour, so every engineering analysis should now be done stochastically. I presently demo a simple stress calculation on an I-beam in Excel on my laptop to show how much more information can be gained from a model using stochastic simulation. Car companies have been conducting stochastic simulation of car crashes for years, and the process is independent of problem size. Every analysis should be run stochastically to take variability into account and get orders of magnitude more information, because today you can do so for minimal extra cost with commodity computing.
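A rough Python sketch of that kind of I-beam calculation follows; the section properties, load ranges, and allowable stress are invented for illustration rather than taken from the actual Excel demo. It contrasts the single number a deterministic calculation returns with the distribution, and exceedance probability, that a stochastic run of the same formula provides:

    # Rough stand-in (in Python; the real demo is an Excel sheet) for a simple I-beam
    # bending-stress calculation. All numbers here are invented for illustration.
    import random
    import statistics

    def bending_stress(moment_Nm, c_m, I_m4):
        # Maximum bending stress in the beam: sigma = M * c / I.
        return moment_Nm * c_m / I_m4

    # Deterministic analysis: one value per variable gives one stress number.
    nominal = bending_stress(moment_Nm=50_000.0, c_m=0.15, I_m4=4.0e-5)
    print(f"Deterministic stress: {nominal / 1e6:.1f} MPa")

    # Stochastic analysis: each variable varied across an assumed natural range.
    ALLOWABLE_PA = 250e6          # assumed allowable stress
    stresses = []
    for _ in range(20_000):
        moment = random.gauss(50_000.0, 7_500.0)   # applied moment varies
        c = random.uniform(0.148, 0.152)           # half-depth within tolerance
        I = random.gauss(4.0e-5, 2.0e-6)           # section property variation
        stresses.append(bending_stress(moment, c, I))

    exceed = sum(s > ALLOWABLE_PA for s in stresses) / len(stresses)
    print(f"Stochastic stress: mean {statistics.fmean(stresses) / 1e6:.1f} MPa, "
          f"max {max(stresses) / 1e6:.1f} MPa, "
          f"P(stress > {ALLOWABLE_PA / 1e6:.0f} MPa) = {exceed:.1%}")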
Another key point, raised by Jerome Aiello, is the need for this to be used by people who have real-world experience and understanding of what they are doing. A poor model will give poor results. (A side benefit that we have found with stochastic simulation is that it can often detect poor analysis models, as the simulation will not run on some combinations of variables as a result of poor modeling practices.)
Simulation Supported Decision Making is a way to take advantage of commodity computing and learn from "virtual" experience that complements our real experience.
Feel free to call or e-mail if anyone has further questions or would like more information on SSDM. I can be contacted at geneallen@decisionincite.com or at 703-582-5554.
Any recent design project involving a complex integrated product that I know about has used simulation, if only here and there in rag-tag fashion. More comprehensive practice is necessary, as at Cummins, whose new 6.7L turbo diesel was based on round-the-clock simulation using a computing facility in India. Components had to reinforce each other; parts and sub-systems could not just "bolt onto the block." Developing this design capability was more significant to Cummins than the engine itself, the first to be designed that way.
That said, I wonder if we are suggesting a tool before fully defining the problem. For example, in a prior life I worked in an engineering information center. I found that half the job was probing what requesters were really trying to do before helping them use that old rickety system. That is, for the time, system capability was not the problem; deciding what to do with it was. I doubt the human element has changed much, so if reducing development time and loop-backs is the objective, there is more to it than simulation capability.
Of course, simulation requires validating that both model and data are relevant to design intent, but what of possibilities and data that are not known and not included in the simulation? I am somewhat aware of Mike Gnam and Paul Chalmer's work; they begin to address this. DfX is a huge shift toward integrative design. Unknowns and data gaps are common. If Decision Incite addresses this, I did not pick it up from their site. Many designers have a limited concept of DfX, and relevant data for it are not easy to come by even if they do. That is, if designers are not clamoring for detailed simulation capability today, why not?
Robert W. "Doc" Hall
Editor-in-Chief, Target
Association for Manufacturing Excellence