Can Digital Computers Think? (1951) by Alan Turing, on AI

Once the media has hyped a concept, the idea of AI gets polluted: a lot of fantasy gets treated as reality. Many years ago the man himself gave this talk on the BBC. So from the start the text was meant for muggles, yet it comes from the pioneer who opened up the field, which is why it manages to be plain, accessible, sharp, and deep.


Digital computers have often been described as mechanical brains. Most scientists probably regard this description as a mere newspaper stunt, but some do not. One mathematician has expressed the opposite point of view to me rather forcefully in the words ‘It is commonly said that these machines are not brains, but you and I know that they are.’ In this talk I shall try to explain the ideas behind the various possible points of view, though not altogether impartially. I shall give most attention to the view which I hold myself, that it is not altogether unreasonable to describe digital computers as brains. A different point of view has already been put by Professor Hartree.

First we may consider the naive point of view of the man in the street. He hears amazing accounts of what these machines can do: most of them apparently involve intellectual feats of which he would be quite incapable. He can only explain it by supposing that the machine is a sort of brain, though he may prefer simply to disbelieve what he has heard.

The majority of scientists are contemptuous of this almost superstitious attitude. They know something of the principles on which the machines are constructed and of the way in which they are used. Their outlook was well summed up by Lady Lovelace over a hundred years ago, speaking of Babbage’s Analytical Engine. She said, as Hartree has already quoted, ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.’ This very well describes the way in which digital computers are actually used at the present time, and in which they will probably mainly be used for many years to come. For any one calculation the whole procedure that the machine is to go through is planned out in advance by a mathematician. The less doubt there is about what is going to happen the better the mathematician is pleased. It is like planning a military operation. Under these circumstances it is fair to say that the machine doesn’t originate anything.

There is however a third point of view, which I hold myself. I agree with Lady Lovelace’s dictum as far as it goes, but I believe that its validity depends on considering how digital computers are used rather than how they could be used. In fact I believe that they could be used in such a manner that they could appropriately be described as brains. I should also say that ‘If any machine can appropriately be described as a brain, then any digital computer can be so described.’

This last statement needs some explanation. It may appear rather startling, but with some reservations it appears to be an inescapable fact. It can be shown to follow from a characteristic property of digital computers, which I will call their universality. A digital computer is a universal machine in the sense that it can be made to replace any machine of a certain very wide class. It will not replace a bulldozer or a steam-engine or a telescope, but it will replace any rival design of calculating machine, that is to say any machine into which one can feed data and which will later print out results. In order to arrange for our computer to imitate a given machine it is only necessary to programme the computer to calculate what the machine in question would do under given circumstances, and in particular what answers it would print out. The computer can then be made to print out the same answers.

If now some particular machine can be described as a brain we have only to programme our digital computer to imitate it and it will also be a brain. If it is accepted that real brains, as found in animals, and in particular in men, are a sort of machine it will follow that our digital computer, suitably programmed, will behave like a brain.

This argument involves several assumptions which can quite reasonably be challenged. I have already explained that the machine to be imitated must be more like a calculator than a bulldozer. This is merely a reflection of the fact that we are speaking of mechanical analogues of brains, rather than of feet or jaws. It was also necessary that this machine should be of the sort whose behaviour is in principle predictable by calculation. We certainly do not know how any such calculation should be done, and it was even argued by Sir Arthur Eddington that on account of the indeterminacy principle in quantum mechanics no such prediction is even theoretically possible.

Another assumption was that the storage capacity of the computer used should be sufficient to carry out the prediction of the behaviour of the machine to be imitated. It should also have sufficient speed. Our present computers probably have not got the necessary storage capacity, though they may well have the speed. This means in effect that if we wish to imitate anything so complicated as the human brain we need a very much larger machine than any of the computers at present available. We probably need something at least a hundred times as large as the Manchester Computer. Alternatively of course a machine of equal size or smaller would do if sufficient progress were made in the technique of storing information.

It should be noticed that there is no need for there to be any increase in the complexity of the computers used. If we try to imitate ever more complicated machines or brains we must use larger and larger computers to do it. We do not need to use successively more complicated ones. This may appear paradoxical, but the explanation is not difficult. The imitation of a machine by a computer requires not only that we should have made the computer, but that we should have programmed it appropriately. The more complicated the machine to be imitated the more complicated must the programme be.

This may perhaps be made clearer by an analogy. Suppose two men both wanted to write their autobiographies, and that one had had an eventful life, but very little had happened to the other. There would be two difficulties troubling the man with the more eventful life more seriously than the other. He would have to spend more on paper and he would have to take more trouble over thinking what to say. The supply of paper would not be likely to be a serious difficulty, unless for instance he were on a desert island, and in any case it could only be a technical or a financial problem. The other difficulty would be more fundamental and would become more serious still if he were not writing his life but a work on something he knew nothing about, let us say about family life on Mars. Our problem of programming a computer to behave like a brain is something like trying to write this treatise on a desert island. We cannot get the storage capacity we need: in other words we cannot get enough paper to write the treatise on, and in any case we don’t know what we should write down if we had it. This is a poor state of affairs, but, to continue the analogy, it is something to know how to write, and to appreciate the fact that most knowledge can be embodied in books.

In view of this it seems that the wisest ground on which to criticise the description of digital computers as ‘mechanical brains’ or ‘electronic brains’ is that, although they might be programmed to behave like brains, we do not at present know how this should be done. With this outlook I am in full agreement. It leaves open the question as to whether we will or will not eventually succeed in finding such a programme. I, personally, am inclined to believe that such a programme will be found. I think it is probable for instance that at the end of the century it will be possible to programme a machine to answer questions in such a way that it will be extremely difficult to guess whether the answers are being given by a man or by the machine. I am imagining something like a viva-voce examination, but with the questions and answers all typewritten in order that we need not consider such irrelevant matters as the faithfulness with which the human voice can be imitated. This only represents my opinion; there is plenty of room for others.

There are still some difficulties. To behave like a brain seems to involve free will, but the behaviour of a digital computer, when it has been programmed, is completely determined. These two facts must somehow be reconciled, but to do so seems to involve us in an age-old controversy, that of ‘free will and determinism’. There are two ways out. It may be that the feeling of free will which we all have is an illusion. Or it may be that we really have got free will, but yet there is no way of telling from our behaviour that this is so. In the latter case, however well a machine imitates a man’s behaviour it is to be regarded as a mere sham. I do not know how we can ever decide between these alternatives but whichever is the correct one it is certain that a machine which is to imitate a brain must appear to behave as if it had free will, and it may well be asked how this is to be achieved. One possibility is to make its behaviour depend on something like a roulette wheel or a supply of radium. The behaviour of these may perhaps be predictable, but if so, we do not know how to do the prediction.

It is, however, not really even necessary to do this. It is not difficult to design machines whose behaviour appears quite random to anyone who does not know the details of their construction. Naturally enough the inclusion of this random element, whichever technique is used, does not solve our main problem, how to programme a machine to imitate a brain, or as we might say more briefly, if less accurately, to think. But it gives us some indication of what the process will be like. We must not always expect to know what the computer is going to do. We should be pleased when the machine surprises us, in rather the same way as one is pleased when a pupil does something which he had not been explicitly taught to do.

Let us now reconsider Lady Lovelace’s dictum. ‘The machine can do whatever we know how to order it to perform.’ The sense of the rest of the passage is such that one is tempted to say that the machine can only do what we know how to order it to perform. But I think this would not be true. Certainly the machine can only do what we do order it to perform, anything else would be a mechanical fault. But there is no need to suppose that, when we give it its orders we know what we are doing, what the consequences of these orders are going to be. One does not need to be able to understand how these orders lead to the machine’s subsequent behaviour, any more than one needs to understand the mechanism of germination when one puts a seed in the ground. The plant comes up whether one understands or not. If we give the machine a programme which results in its doing something interesting which we had not anticipated I should be inclined to say that the machine had originated something, rather than to claim that its behaviour was implicit in the programme, and therefore that the originality lies entirely with us.

I will not attempt to say much about how this process of ‘programming a machine to think’ is to be done. The fact is that we know very little about it, and very little research has yet been done. There are plentiful ideas, but we do not yet know which of them are of importance. As in the detective stories, at the beginning of the investigation any trifle may be of importance to the investigator. When the problem has been solved, only the essential facts need to be told to the jury. But at present we have nothing worth putting before a jury. I will only say this, that I believe the process should bear a close relation to that of teaching.

I have tried to explain what are the main rational arguments for and against the theory that machines could be made to think, but something should also be said about the irrational arguments. Many people are extremely opposed to the idea of a machine that thinks, but I do not believe that it is for any of the reasons that I have given, or any other rational reason, but simply because they do not like the idea. One can see many features which make it unpleasant. If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. A similar danger and humiliation threatens us from the possibility that we might be superseded by the pig or the rat. This is a theoretical possibility which is hardly controversial, but we have lived with pigs and rats for so long without their intelligence much increasing, that we no longer trouble ourselves about this possibility. We feel that if it is to happen at all it will not be for several million years to come. But this new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety.

It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. It might for instance be said that no machine could write good English, or that it could not be influenced by sex-appeal or smoke a pipe. I cannot offer any such comfort, for I believe that no such bounds can be set. But I certainly hope and believe that no great efforts will be put into making machines with the most distinctively human, but non-intellectual characteristics such as the shape of the human body; it appears to me to be quite futile to make such attempts and their results would have something like the unpleasant quality of artificial flowers. Attempts to produce a thinking machine seem to me to be in a different category. The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves.


n5321 | June 30, 2025, 00:01

Add 404 and 500 pages

A 404 page means the requested page does not exist.

A 500 page means a runtime error occurred in your backend code or a template and you did not handle it, so Django returned its default "Server Error" page.

Which page gets shown is controlled by DEBUG = True or False in settings.py.

Implementation:

  1. Prepare the 404 and 500 page templates.

  2. In settings.py:

    1. Replace the static statement DEBUG = True with a dynamic one:
    2. DEBUG = os.environ.get('DJANGO_DEBUG', '') != 'False'
    3. Set the environment variable.

    4. # Linux/macOS
      export DJANGO_DEBUG=False
      python manage.py runserver

      # Windows CMD
      set DJANGO_DEBUG=False
      python manage.py runserver

      # Windows PowerShell
      $env:DJANGO_DEBUG = "False"
      python manage.py runserver
    5. The settings above work on Windows. On Ubuntu you need to change the Gunicorn configuration instead:

      1. Edit Gunicorn's systemd service file.

      2. Find the [Service] section and add: Environment="DJANGO_DEBUG=False"

    6. Add a views.py in the project directory, containing:

      1. from django.shortcuts import render

        def custom_404(request, exception):
          return render(request, 'home/page_error_404.html', status=404)
    7. In the project urls.py, add:

      1. handler404 = 'mysite.views.custom_404'
    8. The 500 page is even simpler: name the template 500.html and put it directly in the template directory, and Django finds it automatically. (A consolidated sketch of steps 6 to 8 follows after this list.)
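
A minimal consolidated sketch of steps 6 to 8, assuming the project package is called mysite and the 404 template lives at home/page_error_404.html as above. The custom_500 view is an optional extra of mine, not part of the original notes; Django's handler500 view takes no exception argument, and a bare 500.html template works without any handler at all.

    # mysite/views.py
    from django.shortcuts import render

    def custom_404(request, exception):
        # Used when DEBUG is False and no URL pattern matches.
        return render(request, 'home/page_error_404.html', status=404)

    def custom_500(request):
        # Optional view-based variant; handler500 views take no exception argument.
        return render(request, '500.html', status=500)

    # mysite/urls.py
    handler404 = 'mysite.views.custom_404'
    # handler500 = 'mysite.views.custom_500'   # only if you prefer a view over a bare 500.html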


Final result:



n5321 | June 27, 2025, 16:52

Accounts App

rethinking multiuser site

Tried out the signup page I wrote a while back. Nothing but surprises.

1. Registration failed! I tried several usernames before one finally went through.

2. After registering, the account has to be activated by email. The activation eventually worked, but I had forgotten almost all of the detailed logic in between.

The activation mail still looks nice, though:

The multiuser questions can wait a little longer.

The test-driven design question, on the other hand, is worth thinking through properly first.




n5321 | June 26, 2025, 16:09

temp0626

A git problem

Background:

To track requests, I split the database: the user-request tracking data went into a separate db named track.sqlite3.

Then I added this db to .gitignore. The goal was for both the development and production environments to have a track.sqlite3 db, but without syncing the data.

Because this db used to be tracked and now suddenly was not, I had to remove it from the git cache, though I don't remember which step actually deleted that cache.

In any case, during that earlier operation I ended up restoring it.

Now, wanting to add a new homepage, I decided to try a git branch,

so I created a new branch, newHome,

and added and removed a few things on the site.

It didn't feel like much of an improvement, so I went back to master and merged newHome. After the merge, the problem appeared:

track.sqlite3 was gone!

Why?!

Somewhere along the way the branch had deleted track.sqlite3 once, and that was that.
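
For the record, a minimal sketch of the usual way to untrack a file while keeping the local copy, plus how to recover it after a merge that deleted it. These are standard git commands, not a reconstruction of what was actually run at the time:

    # stop tracking the db but keep it on disk
    git rm --cached track.sqlite3
    echo "track.sqlite3" >> .gitignore
    git commit -m "Stop tracking track.sqlite3"

    # if a merged branch deleted it, restore it from a commit that still had it
    git checkout <commit-with-the-file> -- track.sqlite3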


n5321 | June 26, 2025, 00:53

fix db manage bug

Using SQLite for now.

PyCharm has a plugin for this: Database Tools and SQL.

It had worked fine for a long time. Then one day the db would not open and kept raising the same error:

Driver class 'org.slf4j.LoggerFactory' not found

Baffling!

Project A was fine; Project B would not work.

After a good half day of trial and error, the logic became clear.

The "Database Tools and SQL" plugin manages database connections through JDBC. It has three sections: Data Sources, Drivers, and DDL Mappings.

On the General tab of a Data Source you can pick the driver.

Drivers is where drivers are configured; SQLite was configured there.

The problem was in the Driver Files.

You need to add custom JARs (and, separately, a library path).

The "Driver class 'org.slf4j.LoggerFactory' not found" error means a JAR is missing. The fix is simply to add the JARs:

slf4j-api-2.0.9.jar and slf4j-simple-2.0.9.jar

An earlier fix attempt had added slf4j-api-2.0.9.jar, but registered it as a library path rather than a custom JAR. The result: "Database Tools and SQL" could not manage the db.

The working setup (screenshot omitted):


Also: why does PyCharm 2023 show "closing project" when you close it, with the window hanging around for a long time?

Fix: in PyCharm, Help -> Find Action -> type Registry -> disable ide.await.scope.completion


n5321 | June 24, 2025, 22:59

Failed Promises

For some time now, many of the most prominent and colorful pages in Mechanical Engineering magazine have been filled by advertisements for computer software. However, there is a difference between the most recent ads and those of just a few years earlier. In 1990, for example, many software developers emphasized the reliability and ease of use of their packages, with one declaring itself the “most reliable way to take the heat, handle the pressure, and cope with the stress” while another promised to provide “trusted solutions to your design challenges.”

More recent advertising copy is a bit more subdued, with fewer implied promises that the software is going to do the work of the engineer—or take the heat or responsibility. The newer message is that the buck stops with the engineer. Software packages might provide “the right tool for the job,” but the engineer works the tool. A sophisticated system might be “the ultimate testing ground for your ideas,” but the ideas are no longer the machine’s, they are the engineer’s. Options may abound in software packages, but the engineer makes a responsible choice. This is as it should be, of course, but things are not always as they should be, and that is no doubt why there have been subtle and sometimes not-so-subtle changes in technical software marketing and its implied promises. Civil Engineering has also run software advertisements, albeit less prominent and colorful ones. Their messages, explicit or implicit, are more descriptive than promising. Nevertheless, the advertisements also contain few caveats about limitations, pitfalls, or downright errors that might be encountered in using prepackaged, often general-purpose software for a specific engineering design or analysis. The implied optimism of the software advertisements stands in sharp contrast to the concerns about the use of software that have been expressed with growing frequency in the pages of the same engineering magazines. The American Society of Civil Engineers, publisher of Civil Engineering and a host of technical journals and publications full of theoretical and applied discussions of computers and their uses, has among its many committees one on “guidelines for avoiding failures caused by misuse of civil engineering software.” The committee’s parent organization, the Technical Council on Forensic Engineering, was the sponsor of a cautionary session on computer use at the society’s 1992 annual meeting, and one presenter titled his paper, “Computers in Civil Engineering: A Time Bomb!” In simultaneous sessions at the same meeting, other equally fervid engineers were presenting computer-aided designs and analyses of structures of the future. There is no doubt that computer-aided design, manufacturing, and engineering have provided benefits to the profession and to humankind. Engineers are attempting and completing more complex and time-consuming analyses that involve many steps (and therefore opportunities for error) and that might not have been considered practicable in slide-rule days. New hardware and software have enabled more ambitious and extensive designs to be realized, including some of the dramatic structures and ingenious machines that characterize the late twentieth century. Today’s automobiles, for example, possess better crashworthiness and passenger protection because of advanced finite-element modeling, in which a complex structure such as a stylish car body is subdivided into more manageable elements, much as we might construct a gracefully curving walkway out of a large number of rectilinear bricks. For all the achievements made possible by computers, there is growing concern in the engineering-design community that there are numerous pitfalls that can be encountered using software packages. All software begins with some fundamental assumptions that translate to fundamental limitations, but these are not always displayed prominently in advertisements. Indeed, some of the limitations of software might be equally unknown to the vendor and to the customer. 
Perhaps the most damaging limitation is that it can be misused or used inappropriately by an inexperienced or overconfident engineer. The surest way to drive home the potential dangers of misplaced reliance on computer software is to recite the incontrovertible evidence of failures of structures, machines, and systems that are attributable to use or misuse of software.

One such incident occurred in the North Sea in August 1991, when the concrete base of a massive Norwegian oil platform, designated Sleipner A, was being tested for leaks and mechanical operation prior to being mated with its deck. The base of the structure consisted of two dozen circular cylindrical reinforced-concrete cells. Some of the cells were to serve as drill shafts, others as storage tanks for oil, and the remainder as ballast tanks to place and hold the platform on the sea bottom. Some of the tanks were being filled with water when the operators heard a loud bang, followed by significant vibrations and the sound of a great amount of running water. After eight minutes of trying to control the water intake, the crew abandoned the structure. About eighteen minutes after the first bang was heard, Sleipner A disappeared into the sea, and forty-five seconds later a seismic event that registered a 3 on the Richter scale was recorded in southern Norway. The event was the massive concrete base striking the sea floor.

An investigation of the structural design of Sleipner A’s base found that the differential pressure on the concrete walls was too great where three cylindrical shells met and left a triangular void open to the full pressure of the sea. It is precisely in the vicinity of such complex geometry that computer-aided analysis can be so helpful, but the geometry must be modeled properly. Investigators found that “unfavorable geometrical shaping of some finite elements in the global analysis … in conjunction with the subsequent post-processing of the analysis results … led to underestimation of the shear forces at the wall supports by some 45%.” (Whether or not due to the underestimation of stresses, inadequate steel reinforcement also contributed to the weakness of the design.) In short, no matter how sound and reliable the software may have been, its improper and incomplete use led to a structure that was inadequate for the loads to which it was subjected.

In its November 1991 issue, the trade journal Offshore Engineer reported that the errors in analysis of Sleipner A “should have been picked up by internal control procedures before construction started.” The investigators also found that “not enough attention was given to the transfer of experience from previous projects.” In particular, trouble with an earlier platform, Statfjord A, which suffered cracking in the same critical area, should have drawn attention to the flawed detail. (A similar neglect of prior experience occurred, of course, just before the fatal Challenger accident, when the importance of previous O-ring problems was minimized.) Prior experience with complex engineering systems is not easily built into general software packages used to design advanced structures and machines. Such experience often does not exist before the software is applied, and it can be gained only by testing the products designed by the software.
A consortium headed by the Netherlands Foundation for the Coordination of Maritime Research once scheduled a series of full-scale collisions between a single- and a double-hulled ship “to test the [predictive] validity of computer modelling analysis and software.” Such drastic measures are necessary because makers and users of software and computer models cannot ignore the sine qua non of sound engineering—broad experience with what happens in and what can go wrong in the real world. Computer software is being used more and more to design and control large and complex systems, and in these cases it may not be the user who is to blame for accidents. Advanced aircraft such as the F-22 fighter jet employ on-board computers to keep the plane from becoming aerodynamically unstable during maneuvers. When an F-22 crashed during a test flight in 1993, according to a New York Times report, “a senior Air Force official suggested that the F-22’s computer might not have been programmed to deal with the precise circumstances that the plane faced just before it crash-landed.” What the jet was doing, however, was not unusual for a test flight. During an approach about a hundred feet above the runway, the afterburners were turned on to begin an ascent—an expected maneuver for a test pilot—when “the plane’s nose began to bob up and down violently.” The Times reported the Air Force official as saying, “It could have been a computer glitch, but we just don’t know.” Those closest to questions of software safety and reliability worry a good deal about such “fly by wire” aircraft. They also worry about the growing use of computers to control everything from elevators to medical devices. The concern is not that computers should not control such things, but rather that the design and development of the software must be done with the proper checks and balances and tests to ensure reliability as much as is humanly possible. A case study that has become increasingly familiar to software designers unfolded during the mid-1980s, when a series of accidents plagued a high-powered medical device, the Therac-25. The Therac-25 was designed by Atomic Energy of Canada Limited (AECL) to accelerate and deliver a beam of electrons at up to 25 mega-electron-volts to destroy tumors embedded in living tissue. By varying the energy level of the electrons, tumors at different depths in the body could be targeted without significantly affecting surrounding healthy tissue, because beams of higher energy delivered the maximum radiation dose deeper in the body and so could pass through the healthy parts. Predecessors of the Therac-25 had lower peak energies and were less compact and versatile. When they were designed in the early 1970s, various protective circuits and mechanical interlocks to monitor radiation prevented patients from receiving an overdose. These earlier machines were later retrofitted with computer control, but the electrical and mechanical safety devices remained in place. Computer control was incorporated into the Therac-25 from the outset. Some safety features that had depended on hardware were replaced with software monitoring. 
“This approach,” according to Nancy Leveson, a leading software safety and reliability expert, and a student of hers, Clark Turner, “is becoming more common as companies decide that hardware interlocks and backups are not worth the expense, or they put more faith (perhaps misplaced) on software than on hardware reliability.” Furthermore, when hardware is still employed, it is often controlled by software.

In their extensive investigation of the Therac-25 case, Leveson and Turner recount the device’s accident history, which began in Marietta, Georgia. On June 3, 1985, at the Kennestone Regional Oncology Center, the Therac-25 was being used to provide follow-up radiation treatment for a woman who had undergone a lumpectomy. When she reported being burned, the technician told her it was impossible for the machine to do that, and she was sent home. It was only after a couple of weeks that it became evident the patient had indeed suffered a severe radiation burn. It was later estimated she received perhaps two orders of magnitude more radiation than that normally prescribed. The woman lost her breast and the use of her shoulder and arm, and she suffered great pain.

About three weeks after the incident in Georgia, another woman was undergoing Therac-25 treatment at the Ontario Cancer Foundation for a carcinoma of the cervix when she complained of a burning sensation. Within four months she died of a massive radiation overdose. Four additional cases of overdose occurred, three resulting in death. Two of these were at the Yakima Valley Memorial Hospital in Washington, in 1985 and 1987, and two at the East Texas Cancer Center, in Tyler, in March and April 1986. These latter cases are the subject of the title tale of a collection of horror stories on design, technology, and human error, Set Phasers on Stun, by Steven Casey.

Leveson and Turner relate the details of each of the six Therac-25 cases, including the slow and sometimes less-than-forthright process whereby the most likely cause of the overdoses was uncovered. They point out that “concluding that an accident was the result of human error is not very helpful and meaningful,” and they provide an extensive analysis of the problems with the software controlling the machine. According to Leveson and Turner, “Virtually all complex software can be made to behave in an unexpected fashion under certain conditions,” and this is what appears to have happened with the Therac-25. Although they admit that to the day of their writing “some unanswered questions” remained, Leveson and Turner report in considerable detail what appears to have been a common feature in the Therac-25 accidents.

The parameters for each patient’s prescribed treatment were entered at the computer keyboard and displayed on the screen before the operator. There were two fundamental modes of treatment, X ray (employing the machine’s full 25 mega-electron-volts) and the relatively low-power electron beam. The first was designated by typing in an “x” and the latter by an “e.” Occasionally, and evidently in at least some if not all of the accident cases, the Therac operator mistyped an “x” for an “e,” but noticed the error before triggering the beam. An “edit” of the input data was performed by using the “arrow up” key to move the cursor to the incorrect entry, changing it, and then returning to the bottom of the screen, where a “beam ready” message was the operator’s signal to enter an instruction to proceed, administering the radiation dose.
Unfortunately, in some cases the editing was done so quickly by the fast-typing operators that not all of the machine’s functions were properly reset before the treatment was triggered. Exactly how much overdose was administered, and thus whether it was fatal, depended upon the installation, since “the number of pulses delivered in the 0.3 second that elapsed before interlock shutoff varied because the software adjusted the start-up pulse-repetition frequency to very different values on different machines.”

Anomalous, eccentric, sometimes downright bizarre, and always unexpected behavior of computers and their software is what ties together the horror stories that appear in each issue of Software Engineering Notes, an “informal newsletter” published quarterly by the Association for Computing Machinery. Peter G. Neumann, chairman of the ACM Committee on Computers and Public Policy, is the moderator of the newsletter’s regular department, “Risks to the Public in Computers and Related Systems,” in which contributors pass on reports of computer errors and glitches in applications ranging from health care systems to automatic teller machines. Neumann also writes a regular column, “Inside Risks,” for the magazine Communications of the ACM, in which he discusses some of the more generic problems with computers and software that prompt the many horror tales that get reported in newspapers, magazines, and professional journals and on electronic bulletin boards. Unfortunately, a considerable amount of the software involved in computer-related failures and malfunctions reported in such forums is produced anonymously, packaged in a black box, and poorly documented. The Therac-25 software, for example, was designed by a programmer or programmers about whom no information was forthcoming, even during a lawsuit brought against AECL. Engineers and others who use such software might reflect upon how contrary to normal scientific and engineering practice its use can be. Responsible engineers and scientists approach new software, like a new theory, with healthy skepticism. Increasingly often, however, there is no such skepticism when the most complicated of software is employed to solve the most complex problems. No software can ever be proven with absolute certainty to be totally error-free, and thus its design, construction, and use should be approached as cautiously as that of any major structure, machine, or system upon which human lives depend. Although the reputation and track record of software producers and their packages can be relied upon to a reasonable extent, good engineering involves checking them out. If the black box cannot be opened, a good deal of confidence in it and understanding of its operation can be inferred by testing. The proof tests to which software is subjected should involve the simple and ordinary as well as the complex and bizarre. A lot more might be learned about a finite-element package, for example, by solving a problem whose solution is already known rather than by solving one whose answer is unknown. In the former case, something might be inferred about the limitations of the black box; in the latter, the output from the black box might bedazzle rather than enlighten. In the final analysis it is the proper attention to detail—in the human designer’s mind as well as in the computer software—that causes the most complex and powerful applications to work properly. A fundamental activity of engineering and science is making promises in the form of designs and theories, so it is not fair to discredit computer software solely on the basis that it promises to be a reliable and versatile problem-solving tool or trusted machine operator. Nevertheless, users should approach all software with prudent caution and healthy skepticism, for the history of science and engineering, including the still-young history of software engineering, is littered with failed promises.
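
As a concrete, if toy, illustration of that kind of proof test (my sketch, not from the essay): before trusting a finite-element tool on a hard problem, run it on one whose answer is known, such as an axially loaded elastic bar whose tip displacement must come out to PL/EA.

    import numpy as np

    # Minimal 1-D bar finite-element model: n two-node elements,
    # fixed at x = 0, axial load P applied at the free end.
    E, A, L, P, n = 210e9, 1e-4, 2.0, 1e3, 10              # steel-ish bar, SI units
    le = L / n
    ke = E * A / le * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness matrix

    K = np.zeros((n + 1, n + 1))
    for e in range(n):
        K[e:e + 2, e:e + 2] += ke                           # assemble global stiffness

    f = np.zeros(n + 1)
    f[-1] = P                                               # point load at the tip
    u = np.zeros(n + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])               # clamp node 0, solve the rest

    print(u[-1], P * L / (E * A))                           # FE tip displacement vs. analytical PL/EA

If the two numbers do not agree, the error is in the model or its use, and that is exactly the sort of thing one wants to learn on a problem with a known answer.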


n5321 | June 19, 2025, 07:03

Diss CAE


I started my career doing FE modeling and analysis with ANSYS and NASTRAN. Sometimes I miss these days. Thinking about how to simplify a real world problem so far that it is solvable with the computational means available was always fun. Then pushing quads around for hours until the mesh was good had an almost meditative effect. But I don't feel overwhelmingly eager to learn a new software or language.

Much to my surprise, it seems there hasn't been much movement there. ANSYS still seems to be the leader for general simulation and multi-physics. NASTRAN still popular. Still no viable open-source solution.

The only new player seems to be COMSOL. Has anyone experience with it? Would it be worth a try for someone who knows ANSYS and NASTRAN well?




I've used ansys daily for over a decade, and the only movement is in how they name their license tiers. It's a slow muddy death march. Every year I'm fighting the software more and more, the salesmen are clearly at the wheel.

They buy "vertical aligned" software, integrate it, then slowly let it die. They just announced they're killing off one of these next year, that they bought ten years ago, because they want to push a competitive product with 20% of the features.

I've been using nastran for half as long but it isn't much better. It's all sales.

I dabbled a bit in abaqus, that seems nice. Probably cause I just dabbled in it.

But here I'm just trying to do my work, and all these companies do is move capabilities around their license tiers and boil the frog as fast as they get away with.


I've gone Abaqus > Ansys > Abaqus/LS-DYNA over my career and hate Ansys with a fiery passion. It's the easiest one to run your first model in, but when you start applying it to real problems it's a fully adversarial relationship. The fact you have to make a complete copy of the geometry/mesh to a new Workbench "block" to run a slightly different load case (and you can't read in orphaned results files) is just horrible.

Abaqus is more difficult to get up to speed in, but its really nice from an advanced usability standpoint. They struggle due to cost though, it is hugely expensive and we've had to fight hard to keep it time and time again.

LS-Dyna is similar to Abaqus (though I'm not fully up in it yet), but we're all just waiting to see how Ansys ruins it, especially now that they got bought out by Synopsys.


I don't know how long ago you used ansys, and I definitely don't want to sell it, but you can share geometry/mesh between those "blocks" (by dragging blocks on top of each other), and you can read in orphaned result files.


> Still no viable open-source solution.

For the more low-level stuff there's the FEniCS project[1], for solving PDEs using fairly straight forward Python code like this[2]. When I say fairly straight forward, I mean it follows the math pretty closely, it's not exactly high-school level stuff.

[1]: https://fenicsproject.org/

[2]: https://jsdokken.com/dolfinx-tutorial/chapter2/linearelastic...
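
To give a flavour of what "follows the math pretty closely" means, here is the canonical Poisson demo written against the older dolfin interface; the linked tutorial[2] uses the newer DOLFINx API, whose calls differ, so treat this as a sketch assuming a legacy FEniCS install:

    from dolfin import *   # legacy FEniCS (dolfin) API

    # -laplace(u) = f on the unit square, u = u_D on the boundary
    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "P", 1)

    u_D = Expression("1 + x[0]*x[0] + 2*x[1]*x[1]", degree=2)
    bc = DirichletBC(V, u_D, "on_boundary")

    u, v = TrialFunction(V), TestFunction(V)
    f = Constant(-6.0)
    a = dot(grad(u), grad(v)) * dx     # bilinear form, mirroring the weak form
    L = f * v * dx                     # linear form

    u_h = Function(V)
    solve(a == L, u_h, bc)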


Interesting. Please bear with me as this is going off 25 year old memories, but my memory is that the workflow for using FEA tools was: Model in some 3D modelling engineering tool (e.g. SolidWorks), ansys to run FEA, iterate if needed, prototype, iterate.

So to have anything useful, you need that entire pipeline? For hobbyists, I assume we need this stack. What are the popular modelling tools?


To get started with Fenics you can maybe use the FEATool GUI, which makes it easier to set up FEA models, and also export Python simulation scripts to learn or modify the Fenics syntax [1].

[1]: https://www.featool.com/tutorial/2017/06/16/Python-Multiphys...


Yeah not my domain so wouldn't really know. For FEniCS I know Gmsh[1] was used. There's some work[2][3] been done to integrate FEniCS with FreeCAD. It seems FreeCAD also supports[4] other FEM solvers.

But, I guess you get what you pay for in this space still.

[1]: https://gmsh.info/

[2]: https://github.com/qingfengxia/Cfd

[3]: https://github.com/qingfengxia/FenicsSolver

[4]: https://wiki.freecad.org/FEM_Solver


You can export other CAD meshes for use in it


> For hobbyists, I assume we need this stack.

Just curious what kind of hobby leads to a finite element analysis?


Electronics (when you start to care about EMI or antenna design), model airplanes (for aerodynamics), rocketry, machining (especially if you want to get into SPIF), robotics, 3-D printing (especially for topology optimization), basically anything that deals with designing solid structures in the physical world. Also, computer graphics, including video games.

Unfortunately the barrier to entry is too high for most hobbyists in these fields to use FEM right now.


There are some obvious downsides and exceptions to this sentiment, but on balance, I really appreciate how the expansive access to information via the internet has fostered this phenomenon: where an unremarkable fella with a dusty media studies degree, a well-equipped garage, and probably too much free time can engineer and construct robotic machines, implement/tweak machine vision mechanisms, microwave radio transceivers, nanometer-scale measurements using laser diodes and optical interferometry, deep-sky astrophotography, etc., etc.. Of course, with burgeoning curiosity and expanding access to surplus university science lab equipment, comes armchair experts and the potential for insufferability[0]. It’s crucial to maintain perspective and be mindful of just how little any one person (especially a person with a media studies degree) can possibly know.

[0] I’m pretty sure “insufferability” isn’t a real word. [Edit: don’t use an asterisk for footnotes.]


comes armchair experts and the potential for insufferability

Hey, I resemble that remark! I'd be maybe a little less armchair with more surplus equipment access, but maybe no less insufferable.

By all accounts, though, a degree of insufferability is no bar to doing worthwhile work; Socrates, Galileo, Newton, Babbage, and Heaviside were all apparently quite insufferable, perhaps as much so as that homeless guy who yells at you about adrenochrome when you walk by his park encampment. (Don't fall into the trap of thinking it's an advantage, though.) Getting sidetracked by trivialities and delusions is a greater risk. Most people spend their whole lives on it.

As for how little any person can know, you can certainly know more than anyone who lived a century ago: more than Einstein, more than Edison, more than Noether, more than Tesla, more than Gauss. Any one of the hobbies you named will put you in contact with information they never had, and you can draw on a century or more of academic literature they didn't have, thanks to Libgen and Sci-Hub (and thus Bitcoin).

And it's easy to know more than an average doctorate holder; all you have to do is study, but not forget everything you study the way university students do, and not fall into traps like ancient aliens and the like. I mean, you can still do good work if you believe in ancient aliens (Newton and Tesla certainly believed dumber things) but probably not good archeological work.

Don't be discouraged by prejudice against autodidacts. Lagrange, Heaviside, and du Châtelet were autodidacts, and Ptolemy seems to have been as well. And they didn't even have Wikipedia or Debian! Nobody gets a Nobel for passing a lot of exams.


IMO, the mathematics underlying finite element methods and related subjects — finite element exterior calculus comes immediately to mind — are interesting enough to constitute a hobby in their own right.


FEniCs is mostly used by academic researchers, I used it for FEM modelling in magnetic for e.g. where the sorts of problems we wanted to solve you can’t do in a commercial package.


COMSOL's big advantage is it ties together a lot of different physics regimes together and makes it very easy to couple different physics together. Want to do coupled structures/fluid? Or coupled electromagnetism/mechanical? Its probably the easiest one to use.

Each individual physics regime is not particularly good on its own - there are far better mechanical, CFD, electromagnetism, etc solvers out there - but they're all made by different vendors and don't play nicely with each other.


> The only new player seems to be COMSOL

Ouch. I kind of know Comsol because it was already taught in my engineering school 15 years ago, so that it still counts as a “new entrant” really gives an idea of how slow the field evolves.


The COMSOL company was started in 1986....


It used to be called FEMLAB :)

But they changed to COMSOL because they didn't have the trademark in Japan and FEM also gave associations to the feminine gender.


I am hoping this open source FEM library will catch on : https://www.dealii.org/. The deal in deal.II stands for Differential Equation Analysis Library.

It's written in C++, makes heavy use of templates, and has been in development since 2000. It's not meant for solid mechanics or fluid mechanics specifically, but for FEM solutions of general PDEs.

The documentation is vast, the examples are numerous and the library interfaces with other libraries like Petsc, Trilinos etc. You can output results to a variety of formats.

I believe support for triangle and tetrahedral elements has been added only recently. In spite of this, one quirk of the library is that meshes are called "triangulations".


I've worked with COMSOL (I have a smaller amount of ANSYS experience to compare to). For the most part I preferred COMSOL's UI and workflow and leveraged a lot of COMSOL's scripting capabilities which was handy for a big but procedural geometry I had (I don't know ANSYS's capabilities for that). They of course largely do the same stuff. If you have easy access to COMSOL to try it out I'd recommend it just for the experience. I've found sometimes working with other tools make me recognize some capabilities or technique that hadn't clicked for me yet.


Once you have a mesh that's "good enough", you can use any number of numeric solvers. COMSOL has a very good mesher, and a competent geometry editor. It's scriptable, and their solvers are also very good.

There might be better programs for some problems, but COMSOL is quite nice.


OpenFOAM seems like an opensource option but I have found it rather impenetrable - there are some youtube videos and pdf tutorials, but they are quite dense and specific and don't seem to cover the entire pipeline

Happy to hear if people have good resources!


Still no viable open-source solution.

Wait? What? NASTRAN was originally developed by NASA and open sourced over two decades ago. Is this commercial software built on top that is closed source?

I’m astonished ANSYS and NASTRAN are still the only players in town. I remember using NASTRAN 20 years ago for FE of structures while doing aero engineering. And even then NASTRAN was almost 40 years old and ancient.


There's a bunch of open source fem solvers e.g. Calculix, Code_Aster, OpenRadioss and probably a few unmaintained forks of (NASA) NASTRAN, but there's no multiphysics package I don't think.


These are at least capable of thermomechanical with fluid-structure coupling. Not all-physics but still multi. True that things with multi species diffusion or electromagnetics are missing, but maybe Elmer can fill the gap.


Abaqus is pretty big too. I've worked with both Ansys and Abaqus and I generally prefer the latter.


Abaqus is up there with Ansys as well, as others have mentioned.


As a recovering fe modeler, I understand completely.


I work in this field and it really is stagnant and dominated by high-priced Ansys/etc. For some reason silicon valley's open sourceness hasn't touched it. For open source, there's CalculiX which is full of bugs and Code Aster which everybody I've heard about it from say it's too confusing to use. CalculiX has Prepomax as a fairly new and popular pre/post.






n5321 | June 15, 2025, 23:43

Diss: Eighty Years of the Finite Element Method (2022)

Eighty Years of the Finite Element Method (2022) (springer.com)
203 points by sandwichsphinx, 7 months ago, 102 comments



I've been a full-time FEM Analyst for 15 years now. It's generally a nice article, though in my opinion paints a far rosier picture of the last couple decades than is warranted.

Actual, practical use of FEM has been stagnant for quite some time. There have been some nice stability improvements to the numerical algorithms that make highly nonlinear problems a little easier; solvers are more optimized; and hardware is of course dramatically more capable (flash storage has been a godsend).

Basically every advanced/"next generation" thing the article touts has fallen flat on its face when applied to real problems. They have some nice results on the world's simplest "laboratory" problem, but accuracy is abysmal on most real-world problems - e.g. it might give good results on a cylinder in simple tension, but fails horribly when adding bending.

There's still nothing better, but looking back I'm pretty surprised I'm still basically doing things the same way I was as an Engineer 1; and not for lack of trying. I've been on countless development projects that seem promising but just won't validate in the real world.

Industry focus has been far more on Verification and Validation (ASME V&V 10/20/40) which has done a lot to point out the various pitfalls and limitations. Academic research and the software vendors haven't been particularly keen to revisit the supposedly "solved" problems we're finding.


I'm a mechanical engineer, and I've been wanting to better understand the computational side of the tools I use every day. Do you have any recommendations for learning resources if one wanted to "relearn" FEA from a computer science perspective?


I learned it for the first time from this[0] course; part of the course covers deal.ii[1] where you program the stuff you're learning in C++.

[0]: https://open.umich.edu/find/open-educational-resources/engin...

[1]: https://www.dealii.org/


Start with FDM. Solve Bernoulli deflection of a beam
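
A sketch of that suggestion (mine, not the commenter's): finite differences for the Euler-Bernoulli equation EI w'''' = q on a simply supported beam, split into two tridiagonal solves and checked against the textbook midspan deflection 5qL^4/(384EI):

    import numpy as np

    # Simply supported Euler-Bernoulli beam under uniform load q:
    # EI w'''' = q, with w = w'' = 0 at both ends.
    # Split into M'' = q (where M = EI w'') and then w'' = M / (EI).
    E, I, L, q, n = 210e9, 8.33e-6, 3.0, 5e3, 200
    h = L / n

    def solve_dirichlet(rhs):
        # Solve y'' = rhs with y(0) = y(L) = 0 via the standard 3-point stencil.
        A = (np.diag(-2.0 * np.ones(n - 1))
             + np.diag(np.ones(n - 2), 1)
             + np.diag(np.ones(n - 2), -1)) / h**2
        y = np.zeros(n + 1)
        y[1:-1] = np.linalg.solve(A, rhs[1:-1])
        return y

    M = solve_dirichlet(q * np.ones(n + 1))          # bending moment, M = EI w''
    w = solve_dirichlet(M / (E * I))                 # deflection

    print(w[n // 2], 5 * q * L**4 / (384 * E * I))   # FD midspan deflection vs. analytical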


Have a look at FEniCs to start with.


>Basically every advanced/"next generation" thing the article touts has fallen flat on its face when applied to real problems

Even Arnold's work? FEEC seemed quite promising last time I was reading about it, but never seemed to get much traction in the wider FEM world.


I kind of thought Neural Operators were slotting into the some problem domains where FEM is used (based on recent work in weather modelling, cloth modelling, etc) and thought there was some sort of FEM -> NO lineage. Did I completely misunderstand that whole thing?


Those are definitely up next in the flashy-new-thing pipeline and I'm not that up to speed on them yet.

Another group within my company is evaluating them right now and the early results seems to be "not very accurate, but directionally correct and very fast" so there may be some value in non-FEM experts using them to quickly tell if A or B is a better design; but will still need a more proper analysis in more accurate tools.

It's still early though and we're just starting to see the first non-research solvers hitting the market.


Very curious, we are getting good results with PiNN and operators, what's your domain?


I was under the impression that the linear systems that come out of FEM methods are in some cases being solved by neural networks (or partially, e.g. as a preconditioner in an iterative scheme), but I don't know the details.


stagnate last 15 years??? Contact elements, bolt preload, modeling individual composite fibers, delamination progressive ply failure, modeling layers of material to a few thousandths of an inch. Design optimization. ANSYS Workbench = FEA For Dummies. The list goes on.


Have you heard of physics informed neural nets?

It seems like a hot candidate to potentially yield better results in the future


Could you write a blogpost-style article on how to model the shallow water wave equation on a sphere? The article would start with the simplest possible method, something that could be implemented in short C program, and would continue with a progressively more accurate and complex methods.


If you are interested in this, I'd recommend following an openfoam tutorial, c++ though.

You could do SWE with finite elements, but generally finite volumes would be your choice to handle any potential discontinuities and is more stable and accurate for practical problems.

Here is a tutorial. https://www.tfd.chalmers.se/~hani/kurser/OS_CFD_2010/johanPi...


I'm looking for something like this, but more advanced. The common problem with such tutorials is that they stop with the simplest geometry (square) and the simplest finite difference method.

What's unclear to me is how do I model the spherical geometry without exploding the complexity of the solution. I know that a fully custom mesh with a pile of formulas for something like beltrami-laplace operator would work, but I want something more elegant than this. For a example, can I use the Fibbonacci spiral to generate a uniform spherical mesh, and then somehow compute gradients and the laplacian?

I suspect that the stability of FE or FV methods is rooted in the fact that the FE functions slightly overlap, so computing the next step is a lot like using an implicit FD scheme, or better, a variation of the compact FD scheme. However I'm interested in how an adept in the field would solve this problem in practice. Again, I'm aware that there are methods of solving such systems (Jacobi, etc.), but those make the solution 10x more complex, buggier and slower.


Interesting that this reads almost like a ChatGPT prompt.


Lazy people have been lazy forever. I stumbled across an example of this the other day from the 1990s, I think, and was shocked how much the student emails sounded like LLM prompts: https://www.chiark.greenend.org.uk/~martinh/poems/questions....


At least those had some basic politeness. So often I'm blown away not only how people blithely write "I NEED HELP, GIMME XYZ NOW NERDS" but especially how everyone is just falling over themselves to actually help! WTF?

Basic politeness is absolutely dead, nobody has any concept of acknowledging they are asking for a favour; we just blast Instagram/TikTok reels at top volume and smoke next to children and elderly in packed public spaces etc. I'm 100% sure it's not rose-tinted memories of the 90s making me think, it wasn't always like this...


It reminds me of the old joke that half of the students are below average…


Except in Lake Woebegone, all of the children are above average


But that's not true, unless by "average" you mean the median.


Normally, it's all the same.


Only if the distribution has zero skewness.

Unless "normally" you mean the normal distribution, which indeed has zero skewness.


Yes, it was an admittedly bad pun.


> Could you write a blogpost-style article on how to model the shallow water wave equation on a sphere?

Typically, Finite Volume Method is used for fluid flow problems. It is possible to use Finite Element Methods, but it is rare.


"As an AI language model, I am happy to comply with your request ( https://chatgpt.com/share/6727b644-b2e0-800b-b613-322072d9d3... ), but good luck finding a data set to verify it, LOL."


I started my career doing FE modeling and analysis with ANSYS and NASTRAN. Sometimes I miss these days. Thinking about how to simplify a real world problem so far that it is solvable with the computational means available was always fun. Then pushing quads around for hours until the mesh was good had an almost meditative effect. But I don't feel overwhelmingly eager to learn a new software or language.

Much to my surprise, it seems there hasn't been much movement there. ANSYS still seems to be the leader for general simulation and multi-physics. NASTRAN still popular. Still no viable open-source solution.

The only new player seems to be COMSOL. Has anyone experience with it? Would it be worth a try for someone who knows ANSYS and NASTRAN well?


I've used ansys daily for over a decade, and the only movement is in how they name their license tiers. It's a slow muddy death march. Every year I'm fighting the software more and more, the sales men are clearly at the wheel.

They buy "vertical aligned" software, integrate it, then slowly let it die. They just announced they're killing off one of these next year, that they bought ten years ago, because they want to push a competitive product with 20% of the features.

I've been using nastran for half as long but it isn't much better. It's all sales.

I dabbed a bit in abaqus, that seems nice. Probably cause I just dabbed in it.

But here I'm just trying to do my work, and all these companies do is move capabilities around their license tiers and boil the frog as fast as they get away with.


I've gone Abaqus > Ansys > Abaqus/LS-DYNA over my career and hate Ansys with a fiery passion. It's the easiest one to run your first model in, but when you start applying it to real problems its a fully adversarial relationship. The fact you have to make a complete copy of the geometry/mesh to a new Workbench "block" to run a slightly different load case (and you can't read in an orphaned results files) is just horrible.

Abaqus is more difficult to get up to speed in, but its really nice from an advanced usability standpoint. They struggle due to cost though, it is hugely expensive and we've had to fight hard to keep it time and time again.

LS-Dyna is similar to Abaqus (though I'm not fully up in it yet), but we're all just waiting to see how Ansys ruins it, especially now that they got bought out by Synopsys.


I don't know how long ago you used Ansys, and I definitely don't want to sell it, but you can share geometry/mesh between those "blocks" (by dragging blocks on top of each other), and you can read in orphaned result files.


> Still no viable open-source solution.

For the more low-level stuff there's the FEniCS project[1], for solving PDEs using fairly straightforward Python code like this[2]. When I say fairly straightforward, I mean it follows the math pretty closely; it's not exactly high-school-level stuff.

[1]: https://fenicsproject.org/

[2]: https://jsdokken.com/dolfinx-tutorial/chapter2/linearelastic...
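For a flavour of what that looks like, here is a minimal sketch of the classic Poisson demo using the legacy FEniCS (2019.x) Python API; the tutorial linked above uses the newer DOLFINx interface, which differs in detail, and the mesh size and boundary expression below are just the usual tutorial choices.

    # Minimal Poisson demo with the legacy FEniCS (2019.x) Python API.
    # Solves -laplace(u) = f on the unit square with u = 1 + x^2 + 2y^2 on the
    # boundary; that same expression is the exact solution.
    from fenics import *

    mesh = UnitSquareMesh(8, 8)                       # structured triangular mesh
    V = FunctionSpace(mesh, "P", 1)                   # piecewise-linear elements

    u_D = Expression("1 + x[0]*x[0] + 2*x[1]*x[1]", degree=2)
    bc = DirichletBC(V, u_D, "on_boundary")           # Dirichlet boundary condition

    u = TrialFunction(V)
    v = TestFunction(V)
    f = Constant(-6.0)                                # matches the exact solution
    a = dot(grad(u), grad(v)) * dx                    # weak form: bilinear part
    L = f * v * dx                                    # weak form: load part
    u = Function(V)
    solve(a == L, u, bc)

    print("L2 error vs exact solution:", errornorm(u_D, u, "L2"))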


Interesting. Please bear with me as this is going off 25-year-old memories, but my memory is that the workflow for using FEA tools was: model in some 3D engineering modelling tool (e.g. SolidWorks), Ansys to run FEA, iterate if needed, prototype, iterate.

So to have anything useful, you need that entire pipeline? For hobbyists, I assume we need this stack. What are the popular modelling tools?


To get started with Fenics you can maybe use the FEATool GUI, which makes it easier to set up FEA models, and also export Python simulation scripts to learn or modify the Fenics syntax [1].

[1]: https://www.featool.com/tutorial/2017/06/16/Python-Multiphys...


Yeah, not my domain so I wouldn't really know. For FEniCS I know Gmsh[1] was used. There's been some work[2][3] to integrate FEniCS with FreeCAD. It seems FreeCAD also supports[4] other FEM solvers.

But, I guess you get what you pay for in this space still.

[1]: https://gmsh.info/

[2]: https://github.com/qingfengxia/Cfd

[3]: https://github.com/qingfengxia/FenicsSolver

[4]: https://wiki.freecad.org/FEM_Solver


You can export other CAD meshes for use in it


> For hobbyists, I assume we need this stack.

Just curious what kind of hobby leads to a finite element analysis?


Electronics (when you start to care about EMI or antenna design), model airplanes (for aerodynamics), rocketry, machining (especially if you want to get into SPIF), robotics, 3-D printing (especially for topology optimization), basically anything that deals with designing solid structures in the physical world. Also, computer graphics, including video games.

Unfortunately the barrier to entry is too high for most hobbyists in these fields to use FEM right now.


There are some obvious downsides and exceptions to this sentiment, but on balance, I really appreciate how the expansive access to information via the internet has fostered this phenomenon: where an unremarkable fella with a dusty media studies degree, a well-equipped garage, and probably too much free time can engineer and construct robotic machines, implement and tweak machine-vision mechanisms, build microwave radio transceivers, make nanometer-scale measurements using laser diodes and optical interferometry, do deep-sky astrophotography, etc., etc. Of course, with burgeoning curiosity and expanding access to surplus university science lab equipment come armchair experts and the potential for insufferability[0]. It’s crucial to maintain perspective and be mindful of just how little any one person (especially a person with a media studies degree) can possibly know.

[0] I’m pretty sure “insufferability” isn’t a real word. [Edit: don’t use an asterisk for footnotes.]


> comes armchair experts and the potential for insufferability

Hey, I resemble that remark! I'd be maybe a little less armchair with more surplus equipment access, but maybe no less insufferable.

By all accounts, though, a degree of insufferability is no bar to doing worthwhile work; Socrates, Galileo, Newton, Babbage, and Heaviside were all apparently quite insufferable, perhaps as much so as that homeless guy who yells at you about adrenochrome when you walk by his park encampment. (Don't fall into the trap of thinking it's an advantage, though.) Getting sidetracked by trivialities and delusions is a greater risk. Most people spend their whole lives on it.

As for how little any person can know, you can certainly know more than anyone who lived a century ago: more than Einstein, more than Edison, more than Noether, more than Tesla, more than Gauss. Any one of the hobbies you named will put you in contact with information they never had, and you can draw on a century or more of academic literature they didn't have, thanks to Libgen and Sci-Hub (and thus Bitcoin).

And it's easy to know more than an average doctorate holder; all you have to do is study, but not forget everything you study the way university students do, and not fall into traps like ancient aliens and the like. I mean, you can still do good work if you believe in ancient aliens (Newton and Tesla certainly believed dumber things) but probably not good archeological work.

Don't be discouraged by prejudice against autodidacts. Lagrange, Heaviside, and du Châtelet were autodidacts, and Ptolemy seems to have been as well. And they didn't even have Wikipedia or Debian! Nobody gets a Nobel for passing a lot of exams.


IMO, the mathematics underlying finite element methods and related subjects — finite element exterior calculus comes immediately to mind — are interesting enough to constitute a hobby in their own right.


FEniCS is mostly used by academic researchers. I used it for FEM modelling in magnetics, e.g. where the sorts of problems we wanted to solve can't be done in a commercial package.


COMSOL's big advantage is that it ties together a lot of different physics regimes and makes it very easy to couple them. Want to do coupled structures/fluid? Or coupled electromagnetism/mechanical? It's probably the easiest one to use.

Each individual physics regime is not particularly good on its own - there are far better mechanical, CFD, electromagnetism, etc solvers out there - but they're all made by different vendors and don't play nicely with each other.


> The only new player seems to be COMSOL

Ouch. I kind of know COMSOL because it was already taught in my engineering school 15 years ago, so the fact that it still counts as a “new entrant” really gives an idea of how slowly the field evolves.


The COMSOL company was started in 1986....


It used to be called FEMLAB :)

But they changed to COMSOL because they didn't have the trademark in Japan, and "FEM" also carried associations with the feminine gender.


I am hoping this open source FEM library will catch on : https://www.dealii.org/. The deal in deal.II stands for Differential Equation Analysis Library.

It's written in C++, makes heavy use of templates, and has been in development since 2000. It's not meant for solid mechanics or fluid mechanics specifically, but for FEM solutions of general PDEs.

The documentation is vast, the examples are numerous and the library interfaces with other libraries like Petsc, Trilinos etc. You can output results to a variety of formats.

I believe support for triangular and tetrahedral elements has been added only recently. In spite of this, one quirk of the library is that meshes are called "triangulations".


I've worked with COMSOL (I have a smaller amount of ANSYS experience to compare to). For the most part I preferred COMSOL's UI and workflow, and I leveraged a lot of COMSOL's scripting capabilities, which was handy for a big but procedural geometry I had (I don't know ANSYS's capabilities for that). They of course largely do the same stuff. If you have easy access to COMSOL to try it out, I'd recommend it just for the experience. I've found that sometimes working with other tools makes me recognize some capability or technique that hadn't clicked for me yet.


Once you have a mesh that's "good enough", you can use any number of numeric solvers. COMSOL has a very good mesher, and a competent geometry editor. It's scriptable, and their solvers are also very good.

There might be better programs for some problems, but COMSOL is quite nice.


OpenFOAM seems like an open-source option, but I have found it rather impenetrable - there are some YouTube videos and PDF tutorials, but they are quite dense and specific and don't seem to cover the entire pipeline.

Happy to hear if people have good resources!


> Still no viable open-source solution.

Wait? What? NASTRAN was originally developed by NASA and open sourced over two decades ago. Is this commercial software built on top that is closed source?

I’m astonished ANSYS and NASTRAN are still the only players in town. I remember using NASTRAN 20 years ago for FE of structures while doing aero engineering. And even then NASTRAN was almost 40 years old and ancient.


There's a bunch of open-source FEM solvers, e.g. CalculiX, Code_Aster, OpenRadioss, and probably a few unmaintained forks of (NASA) NASTRAN, but I don't think there's a multiphysics package.


These are at least capable of thermomechanical analysis with fluid-structure coupling. Not all-physics, but still multi. True, things like multi-species diffusion or electromagnetics are missing, but maybe Elmer can fill the gap.


Abaqus is pretty big too. I've worked with both Ansys and Abaqus and I generally prefer the latter.


Abaqus is up there with Ansys as well, as others have mentioned.


As a recovering fe modeler, I understand completely.


I work in this field and it really is stagnant and dominated by high-priced Ansys/etc. For some reason Silicon Valley's open-source culture hasn't touched it. For open source, there's CalculiX, which is full of bugs, and Code_Aster, which everybody I've heard from says is too confusing to use. CalculiX has PrePoMax as a fairly new and popular pre/post.


During my industrial PhD, I created an Object-Oriented Programming (OOP) framework for Large Scale Air-Pollution (LSAP) simulations.

The OOP framework I created was based on Petrov-Galerkin FEM. (Both proper 2D and "layered" 3D.)

Before my PhD work, the people I worked with (worked for) used spectral methods and Alternate-direction FEM (i.e. using 1D to approximate 2D.)

In some conferences and interviews certain scientists would tell me that programming FEM is easy (for LSAP). I always kind of agreed and asked how many times they had done it (for LSAP or anything else). I never got an answer from those scientists...

Applying FEM to real-life problems can involve the resolving of quite a lot of "little" practical and theoretical gotchas, bugs, etc.


> Applying FEM to real-life problems can involve the resolving of quite a lot of "little" practical and theoretical gotchas, bugs, etc.

FEM at its core ends up being just a technique to find approximate solutions to problems expressed with partial differential equations.

Finding analytical solutions that satisfy both the boundary conditions and the domain of practical problems is essentially impossible. FEM trades correctness for an approximation that can be exact at prescribed boundary conditions but approximates both how the domain is represented and the solution itself, and it has nice properties such as the approximation error converging to the exact solution as the approximation is refined. The price is a rapidly growing computational budget.


I also studied FEM in undergrad and grad school. There's something very satisfying about breaking an intractably difficult real-world problem up into finite chunks of simplified, simulated reality and getting a useful, albeit explicitly imperfect, answer out of the other end. I find myself thinking about this approach often.


A 45 comment thread at the time https://news.ycombinator.com/item?id=33480799


Predicting how things evolve in space-time is a fundamental need. Finite element methods deserve the glory of a place at the top of the HN list. I opted for "orthogonal collocation" as the method of choice for my model back in the day because it was faster and more fitting to the problem at hand. A couple of my fellow researchers did use FEM. It was all the rage in the 90s for sure.


From "Chaos researchers can now predict perilous points of no return" (2022) https://news.ycombinator.com/item?id=32862414 :

FEM: Finite Element Method: https://en.wikipedia.org/wiki/Finite_element_method

>> FEM: Finite Element Method (for ~solving coupled PDEs (Partial Differential Equations))

>> FEA: Finite Element Analysis (applied FEM)

awesome-mecheng > Finite Element Analysis: https://github.com/m2n037/awesome-mecheng#fea

And also, "Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 re: the "relaxation technique" .. https://news.ycombinator.com/item?id=40396171


Interesting perspective. I just attended an academic conference on isogeometric analysis (IGA), which is briefly mentioned in this article. Tom Hughes, who is mentioned several times, is now the de facto leader of the IGA research community. IGA has a lot of potential to solve many of the pain points of FEM. It has better convergence rates in general, allows for better timesteps in explicit solvers, has better methods to ensure stability in, e.g., incompressible solids, and perhaps most exciting, enables an immersed approach, where the problem of meshing is all but gone as the geometry is just immersed in a background grid that is easy to mesh. There is still a lot to be done to drive adoption in industry, but this is likely the future of FEM.


> IGA has a lot of potential to solve many of the pain points of FEM.

Isn't IGA's shtick just replacing classical shape functions with the splines used to specify the geometry?

If I recall correctly convergence rates are exactly the same, but the whole approach fails to realize that, other than boundaries, geometry and the fields of quantities of interest do not have the same spatial distributions.

IGA has been around for ages, and never materialized beyond the "let's reuse the CAD functions" trick, which ends up making the problem more complex without any tangible return when compared with plain old p-refinement. What is left in terms of potential?

> Tom Hughes, who is mentioned several times, is now the de facto leader of the IGA research community.

I recall the name Tom Hughes. I have his FEM book and he's been for years (decades) the only one pushing the concept. The reason being that the whole computational mechanics community looked at it, found it interesting, but ultimately decided it wasn't worth the trouble. There are far more interesting and promising ideas in FEM than using splines to build elements.


> Isn't IGA's shtick just replacing classical shape functions with the splines used to specify the geometry?

That's how it started, yes. The splines used to specify the geometry are trimmed surfaces, and IGA has expanded from there to the use of splines generally as the shape functions, as well as trimming of volumes, etc. This use of smooth splines as shape functions improves the accuracy per degree of freedom.

> If I recall correctly convergence rates are exactly the same

Okay, looks like I remembered wrong here. What we do definitely see is that in IGA you get the convergence rates of higher degrees without drastically increasing your degree of freedom, meaning that there is better accuracy per degree of freedom for any degree above 1. See for example Figures 16 and 18 in this paper: https://www.researchgate.net/profile/Laurens-Coox/publicatio...

> geometry and the fields of quantities of interest do not have the same spatial distributions.

Using the same shape functions doesn't automatically mean that they will have the same spatial distributions. In fact, with hierarchical refinement in splines you can refine the geometry and any single field of interest separately.

> What is left in terms of potential?

The biggest potential other than higher accuracy per degree of freedom is perhaps trimming. In FEM, trimming your shape functions makes the solution unusable. In IGA, you can immerse your model in a "brick" of smooth spline shape functions, trim off the region outside, and run the simulation while still getting optimal convergence properties. This effectively means little to no meshing required. For a company that is readying this for use in industry, take a look at https://coreform.com/ (disclosure, I used to be a software developer there).


I took a course in undergrad, and was exposed to it in grad school again, and for the life of me I still don't understand the derivations, either Galerkin or variational.


I learned from the structural engineering perspective. What are you struggling with? In my mind I have this logic flow: 1. strong-form PDE; 2. weak form; 3. discretized weak form; 4. compute integrals (numerically) over each element; 5. assemble the linear system; 6. solve the linear system.
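Not from the comment above, but as a toy illustration of those steps for the simplest possible case (-u'' = f on (0,1) with u(0) = u(1) = 0 and linear elements), a self-contained NumPy sketch might look like this; the element count and forcing term are arbitrary choices of mine.

    # Toy 1D FEM for -u'' = f on (0,1) with u(0) = u(1) = 0 and linear elements,
    # following the six steps above. The forcing term is chosen so that the
    # exact solution is u = sin(pi*x).
    import numpy as np

    n_el = 16                               # number of elements (arbitrary)
    h = 1.0 / n_el
    nodes = np.linspace(0.0, 1.0, n_el + 1)

    def f(x):
        return np.pi**2 * np.sin(np.pi * x)

    # Step 4: element integrals for linear shape functions (the textbook result
    # k_e = (1/h) * [[1, -1], [-1, 1]]); the load uses one-point midpoint quadrature.
    k_e = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])

    # Step 5: assemble the global stiffness matrix and load vector.
    K = np.zeros((n_el + 1, n_el + 1))
    F = np.zeros(n_el + 1)
    for e in range(n_el):
        dofs = [e, e + 1]
        K[np.ix_(dofs, dofs)] += k_e
        x_mid = 0.5 * (nodes[e] + nodes[e + 1])
        F[dofs] += 0.5 * h * f(x_mid)

    # Step 6: apply the boundary conditions and solve the reduced linear system.
    interior = slice(1, n_el)
    u = np.zeros(n_el + 1)
    u[interior] = np.linalg.solve(K[interior, interior], F[interior])

    print("max nodal error:", np.abs(u - np.sin(np.pi * nodes)).max())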


Luckily the integrals of step 4 are already worked out in textbooks and research papers for all the problems people commonly use FEA for, so you can almost always skip 1, 2, and 3.


Do you have any textbook recommendations for the structural engineering perspective?


For anyone interested in a contemporary implementation, SELF is a spectral element library in object-oriented fortran [1]. The devs here at Fluid Numerics have upcoming benchmarks on our MI300A system and other cool hardware.

[1] https://github.com/FluidNumerics/SELF


I have such a fondness for FEA. ANSYS and COSMOS were the ones I used, and I’ve written toy modelers and solvers (one for my HP 48g) and even tinkered with using GPUs for getting answers faster (back in the early 2000s).

Unfortunately my experience is that FEA is a blunt instrument with narrow practical applications. Where it’s needed, it is absolutely fantastic. Where it’s used when it isn’t needed, it’s quite the albatross.


My hot take is that FEM is best used as unit testing of machine design, not as the design guide it's often used as. The greatest mechanical engineer I know once designed an entire mechanical wrist arm with five fingers, actuations, lots of parts, and flexible finger tendons. He never used FEM at any part of his design. He instead did it the old-fashioned way: design and fab a simple prototype, get a feel for it, use the tolerances you discovered in the next prototype, and just keep iterating quickly. If I had gone to him and told him to model the flexor of his fingers in FEM, and then given him a book telling him how to correctly use the FEM software so that you get results that aren't nonsensical, I would have slowed him down if anything. Just build and you learn the tolerances, and the skill is in building many cheap prototypes to get the best idea of what the final expensive build will look like.


> The greatest mechanical engineer I know, [...]

And with that you wrote the best reply to your own comment. Great programmers of the past wrote amazing systems just in assembly. But you needed to be a great programmer just to get anything done at all.

Nowadays dunces like me can write reasonable software in high level languages with plenty of libraries. That's progress.

Similar for mechanical engineering.

(Doing prototypes etc might still be a good idea, of course. My argument is mainly that what works for the best engineers doesn't necessarily work for the masses.)


Also, might work for a mechanical arm the size of an arm, but not for the size of the Eiffel tower.


The Eiffel Tower was built before FEM existed. In fact I doubt they even did FEM-like calculations.


This is true, although it was notable as an early application of Euler-Bernoulli beam theory in structural engineering, which helped to prove the usefulness of that method.


I meant a mechanical arm the size of the Eiffel Tower. You don't want to iterate physical products at that size.


Going by Boeing vs. SpaceX, iteration seems to be the most effective approach to building robotic physical products the size of the Eiffel Tower.


I'm sure they are doing plenty of calculations beforehand, too.


Unquestionably! Using FEM.


Would FEM be useful for that kind of problem? It's more for figuring out if your structure will take the load, where stress concentrations are, what happens with thermal expansion. FEM won't do much for figuring out what the tolerances need to be on intricate mechanisms.


To be fair, FEM is not the right tool for mechanical linkage design (if anything, you'd use rigid body dynamics).

FEM is the tool you'd use to tell when and where the mechanical linkage assembly will break.


Garbage in garbage out. If you don't fully understand the model, then small parameter changes can create wildly different results. It's always good to go back to fundamentals and hand check a simplification to get a feel for how it should behave.


If he were designing a bridge, however ...


It's wrong to assume that everyone and every project can use an iterative method with endless prototypes. If you do, I have a prototype bridge to sell you.


Good luck designing crash-resilient structures without simulating them in FEM-based software, though.


The FEM is just a model of the crash-resistant structure. Hopefully it will behave like the actual structure, but that is not guaranteed. We use the FEM because it is faster and cheaper than doing the tests on the actual thing. However, if you had the time and money to do your crash resiliency tests on the actual product during the development phase, I expect the results would be much better.


Yes, with infinite time and budget you'd get much better results. That does not sound like an interesting proposition, though.


I'd guess most of the bridges in the US were built before FEM existed.


Anyone can design a bridge that holds up. The Romans did it millennia ago.

Engineering is designing a bridge that holds up to a certain load, with the least amount of material and/or cost. FEM gives you tighter bounds on that.


The average age of a bridge in the US is about 40-50 years old and the title of the article has "80 years of FEM".

https://www.infrastructurereportcard.org/wp-content/uploads/...

I'd posit a large fraction were designed with FEM.


FEM runs on the same math and theories those bridges were designed with on paper.


They did this just fine without such tools for the majority of innovation in the last century.


Having worked on the design of safety structures with mechanical engineers for a few projects, it is far, far cheaper to do a simulation and iterate over designs and situations than do that in a lab or work it out by hand. The type of stuff you can do on paper without FEM tends to be significantly oversimplified.

It doesn't replace things like actual tests, but it makes designing and understanding testing more efficient and more effective. It is also much easier to convince reviewers you've done your job correctly with them.

I'd argue computer simulation has been an important component a majority of mechanical engineering innovation in the last century. If you asked a mechanical engineer to ignore those tools in their job they'd (rightly) throw a fit. We did "just fine" without cars for the majority of humanity, but motorized vehicles significantly changed how we do things and changed the reach of what we can do.


> It is also much easier to convince reviewers you've done your job correctly with them.

In other words, the work that doesn't change the underlying reality of the product?

> We did "just fine" without cars for the majority of humanity

We went to the moon, invented aircraft, bridges, skyscrapers, etc, all without FEM. So that's why this is a bad comparison.

> If you asked a mechanical engineer to ignore those tools in their job they'd (rightly) throw a fit.

Of course. That's what they are accustomed to. 80/20 paper techniques that were replaced by SW were forgotten.

When tests are cheap, you make a lot of them. When they are expensive, you do a few and maximize the information you learn from them.

I'm not arguing FEM doesn't provide net benefit to the industry.


What is your actual assertion? That tools like FEA are needless frippery or that they just dumb down practitioners who could have otherwise accomplished the same things with hand methods? Something else? You're replying to a practicing mechanical engineer whose experience rings true to this aerospace engineer.

Things like modern automotive structural safety or passenger aircraft safety are leagues better today than even as recently as the 1980s because engineers can perform many high-fidelity simulations long before they get to integrated system test. When integrated system test is so expensive, you're not going to explore a lot of new ideas that way.

The argument that computational tools are eroding deep engineering understanding is long-standing, and has aspects of both truth and falsity. Yep, they designed the SR-71 without FEA, but you would never do that today because for the same inflation-adjusted budget, we'd expect a lot more out of the design. Tools like FEA are what help engineers fulfill those expectations today.


> What is your actual assertion?

That the original comment I replied to is false: "Good luck designing crash resilient structures without simulating it on FEM based software."

Now what's my opinion? FEM raises the quality floor of engineering output overall, and more rarely the ceiling. But, excessive reliance on computer simulation often incentivizes complex, fragile, and expensive designs.

> passenger aircraft safety are leagues better today

Yep, but that's just restating the pros. Local iteration and testing.

> You're replying to a practicing mechanical engineer

Oh drpossum and I are getting to know each other.

I agree with his main point. It's an essential tool for combatting certifications and reviews in the world of increasing regulatory and policy based governance.


Replying to finish a discussion no one will probably see, but...

> That the original comment I replied to is false: "Good luck designing crash resilient structures without simulating it on FEM based software."

In refuting the original casually-worded blanket statement, yes, you're right. You can indeed design crash resilient structures without FEA. Especially if they are terrestrial (i.e., civil engineering).

In high-performance applications like aerospace vehicles (excluding general aviation) or automobiles, you will not achieve the required performance on any kind of acceptable timeline or budget without FEA. In these kinds of high-performance applications, the original statement is valid.

> FEM raises the quality floor of engineering output overall, and more rarely the ceiling. But, excessive reliance on computer simulation often incentivizes complex, fragile, and expensive designs.

Do you have any experience in aerospace applications? Because quite often, we reliably achieve structural efficiencies, at prescribed levels of robustness, that we would not achieve sans FEA. It's a matter of making the performance bar, not a matter of simple vs. complex solutions.

> I agree with his main point. It's an essential tool for combatting certifications and reviews in the world of increasing regulatory and policy based governance.

That was one of his points, not the main one. The idea that its primary value is pandering to paper-pushing regulatory bodies and "policy based governance" is specious. Does it help with your certification case? Of course. But the real value is that analyses from these tools are the substantiation we use to determine if the (expensive) design will meet requirements and survive all its stressing load cases before we approve building it. We then have a high likelihood of what we build, assuming it conforms to design intent, performing as expected.


Except that everything's gotten abysmally complex. Vehicle crash test experiments are a good example of validating the FEM simulation (yes that's the correct order, not vice versa)


How can you assert so confidently you know the cause and effect?

Certainly computers allow more complexity, so there is interplay between what it enables and what’s driven by good engineering.


FEM - because we can't solve PDEs!


Is it related to Galerkin?







n5321 | June 15, 2025, 23:31

How to get meaningful and correct results from your finite element model


Martin Bäker, Institut für Werkstoffe, Technische Universität Braunschweig, Langer Kamp 8, D-38106 Braunschweig, martin.baeker@tu-bs.de
November 15, 2018


Abstract

This document gives guidelines to set up, run, and postprocess correct simulations with the finite element method. It is not an introduction to the method itself, but rather a list of things to check and possible mistakes to watch out for when doing a finite element simulation.


The finite element method (FEM) is probably the most-used simulation technique in engineering. Modern finite-element software makes doing FE simulations easy – too easy, perhaps. Since you have a nice graphical user interface that guides you through the process of creating, solving, and postprocessing a finite element model, it may seem as if there is no need to know much about the inner workings of a finite element program or the underlying theory. However, creating a model without understanding finite elements is similar to flying an airplane without a pilot’s license. You may even land somewhere without crashing, but probably not where you intended to.

This document is not a finite element introduction; see, for example, [3, 7, 10] for that. It is a guideline to give you some ideas how to correctly set up, solve and postprocess a finite element model. The techniques described here were developed working with the program Abaqus [9]; however, most of them should be easily transferable to other codes. I have not explained the theoretical basis for most of them; if you do not understand why a particular consideration is important, I recommend studying finite element theory to find out.

1. Setting up the model

1.1 General considerations

These considerations are not restricted to finite element models, but are useful for any complex simulation method.

  • 1.1-1. Even if you just need some number for your design – the main goal of an FEA is to understand the system. Always design your simulations so that you can at least qualitatively understand the results. Never believe the result of a simulation without thinking about its plausibility.

  • 1.1-2. Define the goal of the simulation as precisely as possible. Which question is to be answered? Which quantities are to be calculated? Which conclusions are you going to draw from the simulation? Probably the most common error made in FE simulations is setting up a simulation without having a clear goal in mind. Be as specific as possible. Never set up a model “to see what happens” or “to see how stresses are distributed”.

  • 1.1-3. Formulate your expectations for the simulation result beforehand and make an educated guess of what the results should be. If possible, estimate at least some quantities of your simulation using simplified assumptions. This will make it easier to spot problems later on and to improve your understanding of the system you are studying.

  • 1.1-4. Based on the answer to the previous items, consider which effects you actually have to simulate. Keep the model as simple as possible. For example, if you only need to know whether a yield stress is exceeded somewhere in a metallic component, it is much easier to perform an elastic calculation and check the von Mises stress in the postprocessor (be wary of extrapolations, see 3.2-1) than to include plasticity in the model.

  • 1.1-5. What is the required precision of your calculation? Do you need an estimate or a precise number? (See also 1.4-1 below.)

  • 1.1-6. If your model is complex, create it in several steps. Start with simple materials, assume frictionless behaviour etc. Add complications step by step. Setting up the model in steps has two advantages: (i) if errors occur, it is much easier to find out what caused them; (ii) understanding the behaviour of the system is easier this way because you understand which addition caused which change in the model behaviour. Note, however, that checks you made in an early stage (for example on the mesh density) may have to be repeated later.

  • 1.1-7. Be careful with units. Many FEM programs (like ABAQUS) are inherently unit-free – they assume that all numbers you give can be converted without additional conversion factors. You cannot define your model geometry in millimetres but then use SI units without prefixes everywhere else. Be especially careful in thermomechanical simulations due to the large number of different physical quantities needed there. And of course, also be careful if you use antiquated units like inch, slug, or BTU.
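To make 1.1-7 concrete with an example of my own (not from the original text): in a unit-free code you can work in SI or in the common mm-tonne-s system, but then every single input has to be converted into that system, including the easily forgotten density.

    # Consistent unit systems for a unit-free FE code, illustrated for steel.
    # SI:           length m,  mass kg,    force N, stress Pa,  density kg/m^3
    # mm-tonne-s:   length mm, mass tonne, force N, stress MPa, density tonne/mm^3
    E_SI = 210e9        # Young's modulus in Pa
    rho_SI = 7850.0     # density in kg/m^3

    E_mm = E_SI * 1e-6          # Pa -> MPa (N/m^2 -> N/mm^2)
    rho_mm = rho_SI * 1e-12     # kg/m^3 -> tonne/mm^3

    print(E_mm)     # 210000.0 (MPa)
    print(rho_mm)   # 7.85e-09 (tonne/mm^3); forgetting this factor is a classic mistake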

1.2 Basic model definition

  • 1.2-1. Choose the correct type of simulation (static, quasi-static, dynamic, coupled etc.). Dynamic simulations require the presence of inertial forces (elastic waves, changes in kinetic energies). If inertial forces are irrelevant, you should use static simulations.

  • 1.2-2. As a rule of thumb, a simulation is static or quasi-static if the excitation frequency is less than 1/5 of the lowest natural frequency of the structure [2]. (A minimal check of this rule is sketched after this list.)

  • 1.2-3. In a dynamic analysis, damping may be required to avoid unrealistic multiple reflections of elastic waves that may affect the results [2].

  • 1.2-4. Explicit methods are inherently dynamic. In some cases, explicit methods may be used successfully for quasi-static problems to avoid convergence problems (see 2.1-9 below). If you use mass scaling in your explicit quasi-static analysis, carefully check that the scaling parameter does not affect your solution. Vary the scaling factor (the nominal density) to ensure that the kinetic energy in the model remains small [12].

  • 1.2-5. In a static or quasi-static analysis, make sure that all parts of the model are constrained so that no rigid-body movement is possible. (In a contact problem, special stabilization techniques may be available to ensure correct behaviour before contact is established.)

  • 1.2-6. If you are studying a coupled problem (for example thermo-mechanical), think about the correct form of coupling. If stresses and strains are affected by temperature but not the other way round, it may be more efficient to first calculate the thermal problem and then use the result to calculate thermal stresses. A full coupling of the thermal and mechanical problem is only needed if temperature affects stresses/strains (e. g., due to thermal expansion or temperature-dependent material properties) and if stresses and strains also affect the thermal problem (e. g., due to plastic heat generation or the change in shape affecting heat conduction).

  • 1.2-7. Every FE program uses discrete time steps (except for a static, linear analysis, where no time incrementation is needed). This may affect the simulation. If, for example, the temperature changes during a time increment, the material behaviour may strongly differ between the beginning and the end of the increment (this often occurs in creep problems where the properties change drastically with temperature). Try different maximal time increments and make sure that time increments are sufficiently small so that these effects are small.

  • 1.2-8. Critically check whether non-linear geometry is required. As a rule of thumb, this is almost always the case if strains exceed 5%. If loads are rotating with the structure (think of a fishing rod that is loaded in bending initially, but in tension after it has started to deform), the geometry is usually non-linear. If in doubt, critically compare a geometrically linear and non-linear simulation.
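As a minimal sketch of the 1.2-2 rule of thumb, reduced to a single-degree-of-freedom idealisation; the stiffness, mass and excitation frequency below are placeholder values, not from the text.

    # Single-degree-of-freedom check of the 1.2-2 rule of thumb (placeholder values).
    import math

    k = 2.0e6       # structural stiffness in N/m
    m = 50.0        # moving mass in kg
    f_exc = 5.0     # excitation frequency in Hz

    f_n = math.sqrt(k / m) / (2.0 * math.pi)    # lowest natural frequency in Hz
    if f_exc < f_n / 5.0:
        print(f"f_exc = {f_exc:.1f} Hz < f_n/5 = {f_n / 5.0:.1f} Hz: quasi-static is defensible")
    else:
        print(f"f_exc = {f_exc:.1f} Hz >= f_n/5 = {f_n / 5.0:.1f} Hz: use a dynamic analysis")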

1.3 Symmetries, boundary conditions and loads

  • 1.3-1. Exploit symmetries of the model. In a plane 2D-model, think about whether plane stress, plane strain or generalized plane strain is the appropriate symmetry. (If thermal stresses are relevant, plane strain is almost always wrong because thermal expansion in the 3-direction is suppressed, causing large thermal stresses. Note that these 33-stresses may affect other stress components as well, for example, due to von Mises plasticity.) Keep in mind that the loads and the deformations must conform to the same symmetry.

  • 1.3-2. Check boundary conditions and constraints. After calculating the model, take the time to ensure that nodes were constrained in the desired way in the postprocessor.

  • 1.3-3. Point loads at single nodes may cause unrealistic stresses in the adjacent elements. Be especially careful if the material or the geometry is non-linear. If in doubt, distribute the load over several elements (using a local mesh refinement if necessary).

  • 1.3-4. If loads are changing direction during the calculation, non-linear geometry is usually required, see 1.2-8.

  • 1.3-5. The discrete time-stepping of the solution process may also be important in loading a structure. If, for example, you abruptly change the heat flux at a certain point in time, discrete time stepping may not capture the exact point at which the change occurs, see fig. 1. (Your software may use some averaging procedure to alleviate this.) Define load steps or use other methods to ensure that the time of the abrupt change actually corresponds to a time step in the simulation. This may also improve convergence because it allows you to control the increments at the moment of the abrupt change, see also 2.1-4.

1.4 Input data

  • 1.4-1. A simulation cannot be more precise than its input data allow. This is especially true for the material behaviour. Critically consider how precise your material data really are. How large are the uncertainties? If in doubt, vary material parameters to see how results are affected by the uncertainties.

  • 1.4-2. Be careful when combining material data from different sources and make sure that they are referring to identical materials. In metals, don’t forget to check the influence of heat treatment; in ceramics, powder size or the processing route may affect the properties; in polymers, the chain length or the content of plasticizers is important [13]. Carefully document your sources for material data and check for inconsistencies.

  • 1.4-3. Be careful when extrapolating material data. If data have been described using simple relations (for example a Ramberg-Osgood law for plasticity), the real behaviour may strongly deviate from this.

  • 1.4-4. Keep in mind that your finite element software usually cannot extrapolate material data beyond the values given. If plastic strains exceed the maximum value specified, usually no further hardening of the material will be considered. The same holds, for example, for thermal expansion coefficients which usually increase with temperature. Using different ranges in different materials may thus cause spurious thermal stresses. Fig. 2 shows an example.

  • 1.4-5. If material data are given as equations, be aware that parameters may not be unique. Frequently, data can be fitted using different parameters. As an illustration, plot the simple hardening law A+Bεⁿ with values (130, 100, 0.5) and (100, 130, 0.3) for (A, B, n), see fig. 3. Your simulation results may be indifferent to some changes in the parameters because of this. (A short script reproducing this comparison is sketched after this list.)

  • 1.4-6. If it is not possible to determine material behaviour precisely, finite element simulations may still help to understand how the material behaviour affects the system. Vary parameters in plausible regions and study the answer of the system.

  • 1.4-7. Also check the precision of external loads. If loads are not known precisely, use a conservative estimate.

  • 1.4-8. Thermal loads may be especially problematic because heat transfer coefficients or surface temperatures may be difficult to measure. Use the same considerations as for materials.

  • 1.4-9. If you vary parameters (for example the geometry of your component or the material), make sure that you correctly consider how external loads are changed by this. If, for example, you specify an external load as a pressure, increasing the surface also increases the load. If you change the thermal conductivity of your material, the total heat flux through the structure will change; you may have to specify the thermal load accordingly.

  • 1.4-10. Frictional behaviour and friction coefficients are also frequently unknown. Critically check the parameters you use and also check whether the friction law you are using is correct – not all friction is Coulombian.

  • 1.4-11. If a small number of parameters are unknown, you can try to vary them until your simulation matches experimental data, possibly using a numerical optimization method. (This is the so-called inverse parameter identification [6].) Be aware that the experimental data used this way cannot be used to validate your model (see section 3.3).
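A short script of my own reproducing the comparison suggested in 1.4-5; the strain range is an arbitrary choice.

    # The two parameter sets for sigma = A + B*eps^n from 1.4-5. Over much of a
    # typical strain range the curves lie within a few percent of each other, so
    # a fit to test data cannot reliably distinguish them.
    import numpy as np

    eps = np.linspace(0.02, 0.5, 200)            # plastic strain range (arbitrary)
    sigma_1 = 130 + 100 * eps**0.5               # (A, B, n) = (130, 100, 0.5)
    sigma_2 = 100 + 130 * eps**0.3               # (A, B, n) = (100, 130, 0.3)

    rel_diff = np.abs(sigma_1 - sigma_2) / sigma_1
    print(f"maximum relative difference: {rel_diff.max():.1%}")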

1.5 Choice of the element type

Warning: Choosing the element type is often the crucial step in creating a finite element model. Never accept the default choice of your program without thinking about it.¹ Carefully check which types are available and make sure you understand how a finite element simulation is affected by the choice of element type. You should understand the concepts of element order and integration points (also known as Gauß points) and know the most common errors caused by an incorrectly chosen element type (shear locking, volumetric locking, hourglassing [1,3]).

The following points give some guidelines for the correct choice:

  • 1.5-1. If your problem is linear-elastic, use second-order elements. Reduced integration may save computing time without strongly affecting the results.

  • 1.5-2. Do not use fully-integrated first order elements if bending occurs in your structure (shear locking). Incompatible mode elements may circumvent this problem, but their performance strongly depends on the element shape [7].

  • 1.5-3. If you use first-order elements with reduced integration, check for hourglassing. Keep in mind that hourglassing may occur only in the interior of a three-dimensional structure where seeing it is not easy. Exaggerating the displacements may help in visualizing hourglassing. Most programs use numerical techniques to suppress hourglass modes; however, these may also affect results due to artificial damping. Therefore, also check the energy dissipated by this artificial damping and make sure that it is small compared to other energies in the model.

  • 1.5-4. In contact problems, first-order elements may improve convergence because if one corner and one edge node are in contact, the second-order interpolation of the element edge causes overlaps, see fig. 4. This may especially cause problems in a crack-propagation simulation with a node-release scheme [4, 11].

  • 1.5-5. Discontinuities in stresses or strains may be captured better with first-order elements in some circumstances.

  • 1.5-6. If elements distort strongly, first-order elements may be better than second-order elements.

  • 1.5-7. Avoid triangular or tetrahedral first-order elements since they are much too stiff, especially in bending. If you have to use these elements (which may be necessary in a large model with complex geometry), use a very fine mesh and carefully check for mesh convergence. Think about whether partitioning your model and meshing with quadrilateral/hexahedral elements (at least in critical regions) may be worth the effort. Fig. 5 shows an example where a very complex geometry has to be meshed with tetrahedral elements. Although the mesh looks reasonably fine, the system answer with linear elements is much too stiff.

  • 1.5-8. If material behaviour is incompressible or almost incompressible, use hybrid elements to avoid volumetric locking. They may also be useful if plastic deformation is large because (metal) plasticity is also volume conserving.

  • 1.5-9. Do not mix elements with different order. This can cause overlaps or gaps forming at the interface (possibly not shown by your postprocessor) even if there are no hanging nodes (see fig. 6). If you have to use different order of elements in different regions of your model, tie the interface between the regions using a surface constraint. Be aware that this interface may cause a discontinuity in the stresses and strains due to different stiffness of the element types.

  • 1.5-10. In principle, it is permissible to mix reduced and fully integrated elements of the same order. However, since they differ in stiffness, spurious stress or strain discontinuities may result.

  • 1.5-11. If you use shell or beam elements or similar, make sure to use the correct formulation. Shells and membranes look similar, but behave differently. Make sure that you use the correct mathematical formulation; there are a large number of different types of shell or beam elements with different behaviour.

¹The only acceptable exception may be a simple linear-elastic simulation if your program uses second-order elements. But if all you do is linear elasticity, this article is probably not for you.

1.6 Generating a mesh

  • 1.6-1. If possible, use quadrilateral/hexahedral elements. Meshing 3D-structures this way may be laborious, but it is often worth the effort (see also 1.5-7).

  • 1.6-2. A fine mesh is needed where gradients in stress and strain are large.

  • 1.6-3. A preliminary simulation with a coarse mesh may help to identify the regions where a greater mesh density is required.

  • 1.6-4. Keep in mind that the required mesh density depends on the quantities you want to extract and on the required precision. For example, displacements are often calculated more precisely than strains (or stresses) because strains involve derivatives, i.e. the differences in displacements between nodes.

  • 1.6-5. A mesh convergence study can be used to check whether the model behaves too stiffly (as is often the case for fully integrated first-order elements, see fig. 5) or too softly (which happens with reduced-integration elements). Be careful in evaluating this study: if your model is load-controlled, evaluate displacements or strains to check for convergence; if it is strain-controlled, evaluate forces or stresses. (Stiffness relates forces to displacements, so to check for stiffness you need to check both.) If you use, for example, displacement control, displacements are not sensitive to the actual stiffness of your model since you prescribe the displacement. (A schematic convergence check is sketched at the end of this section.)

  • 1.6-6. Check shape and size of the elements. Inner angles should not deviate too much from those of a regularly shaped element. Use the tools provided by your software to highlight critical elements. Keep in mind that critical regions may be situated inside a 3D-component and may not be directly visible. Avoid badly-shaped elements especially in regions where high gradients occur and in regions of interest.

  • 1.6-7. If you use local mesh refinement, the transition between regions of different element sizes should be smooth. As a rule of thumb, adjacent elements should not differ by more than a factor of 2–3 in their area (or volume). If the transition is too abrupt, spurious stresses may occur in this region because a region that is meshed finer is usually less stiff. Furthermore, the fine mesh may be constrained by the coarser mesh. (As an extreme case, consider a finely meshed quadratic region that is bounded by only four first-order elements – in this case, the region as a whole can only deform as a parallelogram, no matter how fine the interior mesh is.)

  • 1.6-8. Be aware that local mesh refinement may strongly affect the simulation time in an explicit simulation because the stable time increment is determined by the size of the smallest element in the structure. A single small or badly shaped element can drastically increase the simulation time.

  • 1.6-9. If elements are distorting strongly, remeshing may improve the shape of the elements and the solution quality. For this, solution variables have to be interpolated from the old to the new mesh. This interpolation may dampen strong gradients or local extrema. Make sure that this effect is sufficiently small by comparing the solution before and after the remeshing in a contour plot and at the integration points.

  • 1.6-10. Another way of dealing with strong mesh distortions is to start with a mesh that is initially distorted and becomes more regular during deformation. This method usually requires some experimentation, but it may yield good solutions without the additional effort of remeshing.
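As a schematic post-processing helper for the convergence study described in 1.6-5; the function name and the numbers are made up for illustration.

    # Schematic evaluation of a mesh convergence study: feed in the monitored
    # quantity from successively refined meshes and report the relative change
    # between refinements.
    def convergence_report(results, tol=0.01):
        """results: list of (characteristic element size, monitored value), coarsest first."""
        for (h1, v1), (h2, v2) in zip(results, results[1:]):
            change = abs(v2 - v1) / abs(v2)
            status = "converged" if change < tol else "not converged"
            print(f"h = {h1:g} -> {h2:g}: value {v1:g} -> {v2:g}, "
                  f"relative change {change:.2%} ({status})")

    # Example: tip displacement (mm) of a load-controlled model on three meshes.
    convergence_report([(4.0, 1.82), (2.0, 1.95), (1.0, 1.99)])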

1.7 Defining contact problems

  • 1.7-1. Correctly choose master and slave surfaces in a master-slave algorithm. In general, the stiffer (and more coarsely meshed) surface should be the master.

  • 1.7-2. Problems may occur if single nodes get in contact and if surfaces with corners are sliding against each other. Smoothing the surfaces may be helpful.

  • 1.7-3. Nodes of the master surface may penetrate the slave surface; again, smoothing the surfaces may reduce this, see fig. 7.

  • 1.7-4. Some discretization error is usually unavoidable if curved surfaces are in contact. With a pure master-slave algorithm, penetration and material overlap are the most common problems; with a symmetric choice (both surfaces are used as master and as slave), gaps may open between the surfaces, see fig. 8. Check for discretization errors in the postprocessor.

  • 1.7-5. Discretization errors may also affect the contact force. Consider, for example, the Hertzian contact problem of two cylinders contacting each other. If the mesh is coarse, there will be a notable change in the contact force whenever the next node comes into contact. Spurious oscillations of the force may be caused by this.

  • 1.7-6. Make sure that rigid-body motion of contact partners before the contact is established is removed either by adding appropriate constraints or by using a stabilization procedure.

  • 1.7-7. Second-order elements may cause problems in contact (see 1.5-4 and fig. 4) [4, 11]; if they do, try switching to first-order elements.

1.8 Other considerations

  • 1.8-1. If you are inexperienced in using finite elements, start with simple models. Do not try to directly set up a complex model from scratch and make sure that you understand what your program does and what different options are good for. It is almost impossible to find errors in a large and complex model if you do not have long experience and if you do not know what results you expect beforehand.

  • 1.8-2. Many parameters that are not specified by the user are set to default values in finite element programs. You should check whether these defaults are correct; especially for those parameters that directly affect the solution (like element types, material definitions etc.). If you do not know what a parameter does and whether the default is appropriate, consult the manual. For parameters that only affect the efficiency of the solution (for example, which solution scheme is used to solve matrix equations), understanding the parameters is less important because a wrongly chosen parameter will not affect the final solution, but only the CPU time or whether a solution is found at all.

  • 1.8-3. Modern finite element software is equipped with a plethora of complex special techniques (XFEM, element deletion, node separation, adaptive error-controlled mesh-refinement, mixed Eulerian-Lagrangian methods, particle based methods, fluid-structure interaction, multi-physics, user-defined subroutines etc.). If you plan to use these techniques, make sure that you understand them and test them using simple models. If possible, build up a basic model without these features first and then add the complex behaviour. Keep in mind that the impressive simulations you see in presentations were created by experts and may have been carefully selected and may not be typical for the performance.

2. Solving the model

Even if your model is solved without any convergence problems, nevertheless look at the log file written by the solver to check for warning messages. They may be harmless, but they may indicate some problem in defining your model.

Convergence problems are usually reported by the program with warning or error messages. You can also see that your model has not converged if the final time in the time step is not the end time you specified in the model definition.

There are two reasons for convergence problems: On the one hand, the solution algorithm may fail to find a solution even though a solution of the problem does exist. In this case, modifying the solution algorithm may solve the problem (see section 2.2). On the other hand, the problem definition may be faulty so that the problem is unstable and does not have a solution (section 2.3).

If you are new to finite element simulations, you may be tempted to think that these errors are simply caused by specifying an incorrect option or forgetting something in the model definition. Errors of this type exist as well, but they are usually detected before calculation of your model begins (and are not discussed here). Instead, treat the non-convergence of your simulation in the same way as any other scientific problem. Formulate hypotheses why the simulation fails to converge. Modify your model to prove² or disprove these hypotheses to find the cause of the problems.

²Of course natural science is not dealing with “proofs”, but this is not the place to think about the philosophy of science. Replace “prove” with “strengthen” or “find evidence for” if you like.

2.1 General considerations

  • 2.1-1. In an implicit simulation, the size of the time increments is usually automatically controlled by the program. If convergence is difficult, the time increments are reduced.³ Usually, the program stops if the time increment is too small or if the convergence problems persist even after several cutbacks of the time increment. (In Abaqus, you get the error messages Time increment smaller than minimum or Too many attempts, respectively.) These messages themselves thus do not tell you anything about the reason for the convergence problems. To find the cause of the convergence problems, look at the solver log file in the increment(s) before the final error message. You will probably see warnings that tell you what kind of convergence problem was responsible (for example, the residual force is too large, the contact algorithm did not converge, the temperature increments were too large). If available, also look at the unconverged solution and compare it to the last, converged timestep. Frequently, large changes in some quantity may indicate the location of the problem. (A schematic sketch of this increment-cutback logic is given at the end of this section.)

  • 2.1-2. Use the postprocessor to identify the node with the largest residual force and the largest change in displacement in the final increment. Often (but not always) this tells you where the problem in the model occurs. (Apply the same logic in a thermal simulation looking at the temperature changes and heat fluxes.)

  • 2.1-3. If the first increment does not converge, set the size of the first time increment to a very small value. If the problem persists, the model itself may be unstable (missing boundary conditions, initial overlap of contacting surfaces). To find the cause of the problem, you can remove all external loads step by step or add further boundary conditions to make sure that the model is properly constrained (if you pin two nodes for each component, rigid body movements should be suppressed – if the model converges in this case, you probably did not have sufficient boundary conditions in your original model). Alternatively or additionally, you may add numerical stabilization to the problem definition. (In numerical stabilization, artificial friction is added to the movement of nodes so that stabilizing forces are generated if nodes start to move rapidly.) However, make sure that the stabilization does not affect your results too strongly. Also check for abrupt jumps in some boundary conditions, for example a finite displacement that is defined at the beginning of a step or a sudden jump in temperature or load. If you apply a load instantaneously, cutting back the time increments does not help the solution process. If this occurs, ramp your load instead.

  • 2.1-4. Avoid rapid changes in an amplitude within a calculation step (see also 1.2-7 and 1.3-5). For example, if you hold a heat flux (or temperature or stress) for a long time and then abruptly reduce it within the same calculation step, the time increment will suddenly jump to a point where the temperature is strongly reduced. This abrupt change may cause convergence problems. Define a second step and choose small increments at the beginning of the second step where large changes in the model can be expected.

  • 2.1-5. Try the methods described in section 2.2 to see whether the problem can be resolved by changing the solution algorithm.

  • 2.1-6. Sometimes, it is the calculation of the material law at an integration point that does not converge (to calculate stresses from strains, the solver uses another Newton algorithm at each integration point [3]). If this is the case, the material definition may be incorrect or problematic (for example, due to incorrectly specified material parameters or because there is extreme softening at a point).

  • 2.1-7. Simplify your model step by step to find the reason for the convergence problems. Use simpler material laws (simple plasticity instead of damage, elasticity instead of plasticity), switch off non-linear geometry, remove external loads, etc. If the problem persists, try to create a minimum example – the smallest example you can find that shows the same problem. This has several advantages: the minimum example is easier to analyse, it needs less computing time so that trying things out is faster, and it can also be shown to others if you are looking for help (see section 4).

  • 2.1-8. If your simulation is static, switching to an implicit dynamic simulation may help because the inertial forces act as natural stabilizers. If possible, use a quasi-static option.

  • 2.1-9. Explicit simulations usually have fewer convergence problems. A frequently heard piece of advice for solving convergence problems is therefore to switch from implicit to explicit models. I strongly recommend switching from implicit static to explicit quasi-static for convergence reasons only if you understand the reasons for the convergence problems and cannot overcome them with the techniques described here. You should also keep in mind that explicit programs may offer different functionality (for example, different element types). If your problem is static, you can only use a quasi-static explicit analysis, which may itself have problems (see 1.2-4). Be aware that in an explicit simulation, elastic waves may occur that change the stress patterns.

³The rationale behind this is that the solution from the previous increment is a better initial guess for the next increment if the change in the load is reduced.
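To illustrate 2.1-3 and 2.1-4, the following fragment ramps a concentrated load over a static step and starts with a small initial increment instead of applying the load instantaneously. It is only a sketch in Abaqus-style keyword format: the set name LOADNODES, the load magnitude and all increment sizes are made-up placeholders, and the exact syntax should be checked against the manual of your program version.

  *AMPLITUDE, NAME=RAMP, DEFINITION=TABULAR
  0.0, 0.0, 1.0, 1.0
  *STEP, NAME=LoadStep, NLGEOM=YES, INC=200
  *STATIC
  ** initial increment, step time, minimum increment, maximum increment
  0.001, 1.0, 1e-8, 0.1
  *CLOAD, AMPLITUDE=RAMP
  LOADNODES, 2, -1000.
  *END STEP

(Abaqus normally ramps loads within a static step anyway, but an explicit amplitude documents the intent and can be reused for other load and boundary-condition types. The essential points of 2.1-3 and 2.1-4 are the small initial increment and the absence of an instantaneous jump in the load.)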

2.2 Modifying the solution algorithm

If your solution algorithm does not converge for numerical reasons, these modifications may help. They are useless if there is a true model instability, see section 2.3.

  • 2.2-1. Finite element programs use default values to control the Newton iterations. If no convergence is reached after a fixed number of iterations, the time step is cut back. In strongly non-linear problems, these default values may be too tight. For example, Abaqus cuts back on the time increment if the Newton algorithm does not converge after 4 iterations; setting this number to a larger value is often sufficient to reach convergence (for example, by adding *Controls, analysis=discontinuous to the input file).

  • 2.2-2. If the Newton algorithm does not converge, the time increment is cut back. If it becomes smaller than a pre-defined minimum value, the simulation stops with an error message. This minimum size of the time increment can be adjusted. Furthermore, if a sudden loss in stability (or change in load) occurs so that time increments need to be changed by several orders of magnitude, the number of cutbacks also needs to be adapted (see next point). In this case, another option is to define a new time step (see 2.1-4) that starts at this critical point and that has a small initial increment.

  • 2.2-3. The allowed number of cutbacks per increment can also be adapted (in Abaqus, use *CONTROLS, parameters=time incrementation). This may be helpful if the simulation proceeds at first with large increments before some difficulty is reached – allowing for a larger number of cutbacks enables the program to use large timesteps at the beginning. Alternatively, you can reduce the maximum time increment (so that the size of the necessary cutback is reduced) or you can split your simulation step in two with different time incrementation settings in the step where the problem occurs (see 2.1-4).

  • 2.2-4. Be aware that the previous two points will work sometimes, but not always. There is usually no point in allowing a minimum time increment that is ten or twenty orders of magnitude smaller than the step size, or in allowing dozens of cutbacks; this only increases the CPU time.

  • 2.2-5. Depending on your finite element software, there may be many more options to tune the solution process. In Abaqus, for example, the initial guess for the solution of a time increment is calculated by extrapolation from the previous increments. Usually this improves convergence, but it may cause problems if something in the model changes abruptly. In this case, you can switch the extrapolation off (*STEP, extrapolation=no). You can also add a line search algorithm that scales the calculated displacements to find a better solution (*CONTROLS, parameters=line search). Consult the manual for further options to improve convergence; a combined input-deck sketch of these options is given after this list.

  • 2.2-6. While changing the iteration control (as explained in the previous points) is often needed to achieve convergence, the solution controls that are used to determine whether a solution has converged should only be changed if absolutely necessary. Only do so (in Abaqus, use *CONTROLS, parameters=field) if you know exactly what you are doing. One example where changing the controls may be necessary is when the stress is strongly concentrated in a small part of a very large structure [5]. In this case, an average nodal force that is used to determine convergence may impose too strong a constraint on the convergence of the solution, so that convergence should be based on local forces in the region of stress concentration. Be aware that since forces, not stresses, are used in determining the convergence, changing the mesh density requires changing the solution controls. Make sure that the accepted solution is indeed a solution and that your controls are sufficiently strict. Vary the controls to ensure that their value does not affect the solution.

  • 2.2-7. Contact problems sometimes do not converge due to problems in establishing which nodes are in contact (sometimes called “zig-zagging” [14]). This often happens if the first contact is made by a single node. Smoothing the contact surfaces may help.

  • 2.2-8. If available and possible, use general contact definitions where the contact surfaces are determined automatically.

  • 2.2-9. If standard contact algorithms do not converge, soft contact formulations (which implement a soft transition between “no contact” and “full contact”) may improve convergence; however, they may allow for some penetration of the surfaces and thus affect the results.
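The iteration-control options of 2.2-1, 2.2-3 and 2.2-5 can be combined within one step definition. The fragment below (referenced from 2.2-5 above) is a sketch, not a recommendation of particular values; most data lines of the *CONTROLS options are omitted or only indicated by comments because their exact layout is version-dependent and should be taken from the manual.

  *STEP, EXTRAPOLATION=NO, INC=500
  *STATIC
  0.01, 1.0, 1e-8, 0.1
  *CONTROLS, ANALYSIS=DISCONTINUOUS
  *CONTROLS, PARAMETERS=TIME INCREMENTATION
  ** data lines (see the manual): number of equilibrium iterations before a
  ** cutback, number of cutbacks allowed per increment, etc.
  *CONTROLS, PARAMETERS=LINE SEARCH
  ** data line (see the manual): maximum number of line search iterations
  5,
  *END STEP

These settings only change how persistently the solver iterates and cuts back; they do not change what counts as a converged solution. The convergence criteria themselves are the *CONTROLS, parameters=field settings of 2.2-6 and should normally be left at their defaults.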

2.3 Finding model instabilities

A model is unstable if there actually is no solution to the mechanical problem.

  • 2.3-1. Instabilities are frequently due to a loss in load bearing capacity of the structure. There are several reasons for that:

    • The material definition may be incorrect. If, for example, a plastic material is defined without hardening, the load cannot increase after the component has fully plastified. Simple typos or incorrectly used units may also cause a loss in material strength.

    • Thermal softening (the reduction of strength with increasing temperature) may cause an instability in a thermo-mechanical problem.

    • Non-linear geometry may cause an instability because the cross section of a load-bearing component reduces during deformation.

    • A change in contact area, a change from sticking to sliding in a simulation with friction or a complete loss of contact between two bodies may also cause instabilities because the structure may not be able to bear an increase in the load.

  • 2.3-2. Local instabilities may cause highly distorted meshes that prevent convergence. It may be helpful to define the mesh in such a way that elements become more regular during deformation (see also 1.6-10).

  • 2.3-3. If your model is load-controlled (a force is applied), switch to a displacement-controlled loading. This avoids instabilities due to loss in load-bearing capacity.

  • 2.3-4. Artificial damping (stabilization) may be added to stabilize an unstable model. However, check carefully that the solution is not unduly affected by this. Adding artificial damping may also help to determine the cause of the instability: if your model converges with damping, you know that an instability is present (a minimal sketch of adding stabilization follows after this list).
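A sketch of the stabilization mentioned in 2.3-4, again in Abaqus-style keywords; the dissipated-energy fraction is only a placeholder close to the documented default, and other programs offer equivalent artificial-damping options.

  *STEP, NLGEOM=YES
  *STATIC, STABILIZE=2e-4
  0.01, 1.0, 1e-8, 0.1
  *END STEP

After such a run, compare the energy dissipated by the stabilization with the strain energy of the model (in Abaqus, the history outputs ALLSD and ALLSE). If the dissipated energy is not small compared to the strain energy, the damping is distorting the solution and the result should not be trusted – although the mere fact that the damped model converges still tells you that an instability is present.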

2.4 Problems in explicit simulations

As already stated in 2.1-9, explicit simulations have fewer convergence problems than implicit simulations. However, sometimes even an explicit simulation may run into trouble.

  • 2.4-1. During simulation, elements may distort excessively. This may happen for example if a concentrated load acts on a node or if the displacement of a node becomes very large due to a loss in stability (for example in a damage model). In this case, the element shape might become invalid (crossing over of element edges, negative volumes at integration points etc.). If this happens, changing the mesh might help – elements that have a low quality (large aspect ratio, small initial volume) are especially prone to this type of problem. Note that second-order elements are often more sensitive to this problem than first-order elements.

  • 2.4-2. The stable time increment in an explicit simulation is given by the time a sound wave needs to travel through the smallest element (a rough estimate is given after this list). If elements distort strongly, they may become very thin in one direction so that the stable time increment becomes unreasonably small. In this case, changing the mesh might help.
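A rough estimate for 2.4-2 (a standard textbook relation, not specific to any particular program): the stable time increment is approximately Δt ≈ L_min / c_d, where L_min is the smallest element dimension and c_d = sqrt(E/ρ) is the one-dimensional elastic wave speed. For steel (E ≈ 210 GPa, ρ ≈ 7800 kg/m³), c_d ≈ 5200 m/s, so a 1 mm element gives Δt of the order of 2·10⁻⁷ s. If an element is squashed during the simulation to a thickness of 0.01 mm, the stable increment drops by a factor of 100 and the run time grows accordingly.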

3. Postprocessing

There are two aspects to checking that a model is correct: Verification is the process of showing that the model was correctly specified and actually does what it was created to do (loads, boundary conditions, material behaviour etc. are correct). Validation means to check the model by making an independent prediction (i. e., a prediction that was not used in specifying or calibrating the model) and checking this prediction in some other way (for example, experimentally).⁴

General advice: If you modify your model significantly (because you build up a complicated model in steps, have to correct errors or add more complex material behaviour to get agreement with experimental results etc.), you should again check the model. It is not clear that the mesh density that was sufficient for your initial model is still sufficient for the modified model. The same is true for other considerations (like the choice of element type etc.).

⁴Note that the terms “verification” and “validation” are used differently in different fields.

3.1 Checking the plausibility and verifying the model

  • 3.1-1. Check the plausibility of your results. If your simulation deviates from your intuition, continue checking until you are sure that you understand why your intuition (or the simulation) was incorrect. Never believe a result of a simulation that you do not understand and that should be different according to your intuition. Either the model or your understanding of the physical problem is incorrect – in both cases, it is important to understand all effects.

  • 3.1-2. Check your explanations for the solution, possibly with additional simulations. For example, if you assume that thermal expansion is the cause of a local stress maximum, re-run the simulation with a different or vanishing coefficient of thermal expansion. Predict the results of such a simulation and check whether your prediction was correct.

  • 3.1-3. Check all important solution variables. Even if you are only interested in, for example, displacements of a certain point, check stresses and strains throughout the model.

  • 3.1-4. In 3D-simulations, do not only look at contour plots of the component’s surface; also check the results inside the component by cutting through it.

  • 3.1-5. Make sure you understand which properties are vectors or tensors. Which components of the stresses or strains are relevant depends on your model, the material, and the question you are trying to answer. Default settings of the postprocessor are not always appropriate; for example, Abaqus plots the von Mises stress as the default stress variable, which is not very helpful for ceramic materials.

  • 3.1-6. Check the boundary conditions again. Are all nodes constrained in the desired manner? Exaggerating the deformation (use Common plot options in Abaqus) or picking nodes with the mouse may be helpful to check this precisely.

  • 3.1-7. Check the mesh density (see 1.6-5). If possible, calculate the model with different mesh densities (possibly for a simplified problem) and make sure that the mesh you finally use is sufficiently fine. When comparing different meshes, the variation in the mesh density should be sufficiently large to make sure that you can actually see an effect. A simple quantitative check is sketched after this list.

  • 3.1-8. Check the mesh quality again, paying special attention to regions where gradients are large. Check that the conditions explained in section 1.6 (element shapes and sizes, no strong discontinuities in the element sizes) are fulfilled and that discontinuities in the stresses are not due to a change in the numerical stiffness (due to a change in the integration scheme or element size).

  • 3.1-9. Check that stresses are continuous between elements. At interfaces between different materials, check that normal stresses and tangential strains are continuous.

  • 3.1-10. Check that the normal stress at any free surface is zero.

  • 3.1-11. Check the mesh density at contact surfaces: can the actual movement and deformation of the surfaces be represented by the mesh? For example, if a mesh is too coarse, nodes may be captured in a corner or a surface may not be able to deform correctly.

  • 3.1-12. Keep in mind that discretization errors at contact surfaces also influence stresses and strains. If you use non-standard contact definitions (2.2-9), try to evaluate how these influence the stresses (for example by comparing actual node positions with what you would expect for hard contact).

  • 3.1-13. Watch out for divergences. The stress at a sharp notch or crack tip is theoretically infinite – the value shown by your program is then solely determined by the mesh density and, if you use a contour plot, by the extrapolation used by the postprocessor (see 3.2-1).

  • 3.1-14. In dynamic simulations, elastic waves propagate through the structure. They may dominate the stress field. Watch out for reflections of elastic waves and keep in mind that, in reality, these waves are damped.

  • 3.1-15. If you assumed linear geometry, check whether strains and deformations are sufficiently small to justify this assumption, see 1.2-8.
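One way to make the mesh-density check of 3.1-7 quantitative (a standard numerical-analysis device, not a feature of any particular FE program): if a quantity of interest u_h converges with order p in the element size h, two meshes with element sizes h and 2h give the estimate

  u_exact ≈ u_h + (u_h − u_{2h}) / (2^p − 1),

so the second term estimates the discretization error of the fine mesh. If this estimate is not small compared to the accuracy you need, refine further; if halving the element size barely changes u_h, the mesh is probably fine enough for that quantity. (The order p depends on the element type and on how smooth the solution is, so treat the result as a guide rather than a guarantee.)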

3.2 Implementation issues

  • 3.2-1. Quantities like stresses or strains are only defined at integration points. Do not rely on extreme values from a contour plot – these values are extrapolated. It strongly depends on the problem whether these extrapolated values are accurate or not. For example, in an elastic material, the extrapolation is usually reasonable, in an ideally-plastic material, extrapolated von Mises stresses may exceed the actual yield stress by a factor of 2 or more. Furthermore, the contour lines themselves may show incorrect maxima or minima, see fig. 9 for an example.

  • 3.2-2. It is often helpful to use “quilt” plots where each element is shown in a single color averaged from the integration point values (see also fig. 9).

  • 3.2-3. The frequently used rainbow color spectrum has been shown to be misleading and should not be used [8]. Gradients may be difficult to interpret because human color vision has a different sensitivity in different parts of the spectrum. Furthermore, many people have a color vision deficiency and are unable to discern reds, greens and yellows. For variables that run from zero to a maximum value (temperature, von-Mises stress), use a sequential spectrum (for example, from black to red to yellow), for variables that can be positive and negative, use a diverging spectrum with a neutral color at zero, see fig. 10.

  • 3.2-4. Discrete time-stepping (see 1.2-7) may also influence the post-processing of results. If you plot the stress-strain curve of a material point by connecting values measured at the discrete simulation times, the resulting curve will not coincide perfectly with the true stress-strain curve, although the data points themselves are correct.

  • 3.2-5. Complex simulation techniques (like XFEM, element deletion etc., see 1.8-3) frequently use internal parameters to control the simulation that may affect the solution process. Do not rely on default values for these parameters and check that the values do not affect the solution inappropriately.

  • 3.2-6. If you use element deletion, be aware that removing elements from the simulation is basically an unphysical process since material is removed. This may affect the energy balance or stress fields near the removed elements. For example, in models of machining processes, removing elements at the tool tip to separate the material strongly influences the residual stress field.

3.3 Validation

  • 3.3-1. If possible, use your model to make an independent prediction that can be tested.

  • 3.3-2. If you used experimental data to adapt unknown parameters (see 1.4), correctly reproducing these data with the model does not validate it, but only verifies it.

  • 3.3-3. The previous point also holds if you made a prediction and afterwards had to change your model to get agreement with an experiment. After this model change, the experiment can no longer be considered an independent validation.

4. Getting help

If you cannot solve your problem, you can try to get help from your software vendor's support service (provided you are entitled to support) or from the internet (for example on ResearchGate or iMechanica). To get helpful answers, please observe the following points:

  • 4-1. Check that you have read relevant pages in the manual and that your question is not answered there.

  • 4-2. Describe your problem as precisely as possible. What error occurred? What was the exact error message, and which warnings appeared? Show pictures of the model and describe it (which element type, which material, what kind of problem – static, dynamic, explicit, implicit, etc.).

  • 4-3. If possible, provide a copy of your model or, even better, provide a minimum example that shows the problem (see 2.1-7).

  • 4-4. If you get answers to your request, give feedback whether this has solved your problem, especially if you are in an internet forum or similar. People are sacrificing their time to help you and will be interested to see whether their advice was actually helpful and what the solution to the problem was. Providing feedback will also help others who find your post because they are facing similar problems.

Acknowledgement

Thanks to Philipp Seiler for many discussions and for reading a draft version of this manuscript, and to Axel Reichert for sharing his experience on getting models to converge.

References

[1] F. Armero. On the locking and stability of finite elements in finite deformation plane strain problems. Computers & Structures, 75(3):261–290, 2000.
[2] CAE Associates. Practical FEA simulations. https://caeai.com/blog/practical-fea-simulations?utm_source=feedblitz&utm_medium=FeedBlitzRss&utm_campaign=caeai. Accessed 31.5.2017.
[3] Martin Bäker. Numerische Methoden in der Materialwissenschaft. Fachbereich Maschinenbau der TU Braunschweig, 2002.
[4] Martin Bäker, Stefanie Reese, and Vadim V. Silberschmidt. Simulation of crack propagation under mixed-mode loading. In Siegfried Schmauder, Chuin-Shan Chen, Krishan K. Chawla, Nikhilesh Chawla, Weiqiu Chen, and Yutaka Kagawa, editors, Handbook of Mechanics of Materials. Springer Singapore, Singapore, 2018.
[5] Martin Bäker, Joachim Rösler, and Carsten Siemers. A finite element model of high speed metal cutting with adiabatic shearing. Computers & Structures, 80(5):495–513, 2002.
[6] Martin Bäker and Aviral Shrot. Inverse parameter identification with finite element simulations using knowledge-based descriptors. Computational Materials Science, 69:128–136, 2013.
[7] Klaus-Jürgen Bathe. Finite Element Procedures. Klaus-Jürgen Bathe, 2006.
[8] David Borland and Russell M. Taylor II. Rainbow color map (still) considered harmful. IEEE Computer Graphics and Applications, (2):14–17, 2007.
[9] Dassault Systèmes. Abaqus Manual, 2017.
[10] Guido Dhondt. The Finite Element Method for Three-Dimensional Thermomechanical Applications. Wiley, 2004.
[11] Ronald Krueger. Virtual crack closure technique: History, approach, and applications. Applied Mechanics Reviews, 57(2):109, 2004.
[12] A. M. Prior. Applications of implicit and explicit finite element techniques to metal forming. Journal of Materials Processing Technology, 45(1):649–656, 1994.
[13] Joachim Rösler, Harald Harders, and Martin Bäker. Mechanical Behaviour of Engineering Materials: Metals, Ceramics, Polymers, and Composites. Springer Science & Business Media, 2007.
[14] Peter Wriggers and Tod A. Laursen. Computational Contact Mechanics, volume 30167. Springer, 2006.



A Possible First Use of CAM/CAD


Norman Sanders
Cambridge Computer Lab Ring, William Gates Building, Cambridge, England
ProjX, Walnut Tree Cottage, Tattingstone Park, Ipswich, Suffolk IP9 2NF, England


Abstract

This paper is a discussion of the early days of CAM-CAD at the Boeing Company, covering the period approximately 1956 to 1965. This period saw probably the first successful industrial application of ideas that were gaining ground during the very early days of the computing era. Although the primary goal of the CAD activity was to find better ways of building the 727 airplane, this activity led quickly to the more general area of computer graphics, leading eventually to today’s picture-dominated use of computers.

Keywords: CAM, CAD, Boeing, 727 airplane, numerical-control.


1. Introduction to Computer-Aided Design and Manufacturing

Some early attempts at CAD and CAM systems occurred in the 1950s and early 1960s. We can trace the beginnings of CAD to the late 1950s when Dr. Patrick J. Hanratty developed Pronto, the first commercial numerical-control (NC) programming system. In the early 1960s, Ivan Sutherland at MIT's Lincoln Laboratory created Sketchpad, which demonstrated the basic principles and feasibility of computer-aided technical drawing.

There seems to be no generally agreed date or place where Computer-Aided Design and Manufacturing saw the light of day as a practical tool for making things. However, I know of no earlier candidate for this role than Boeing’s 727 aircraft. Certainly the dates given in the current version of Wikipedia are woefully late; ten years or so.

So, this section is a description of what we did at Boeing from about the mid-fifties to the early sixties. It is difficult to specify precisely when this project started – as with most projects. They don’t start, but having started they can become very difficult to finish. But at least we can talk in terms of mini eras, approximate points in time when ideas began to circulate and concrete results to emerge.

Probably the first published ideas for describing physical surfaces mathematically were those in Roy Liming's Practical Analytic Geometry with Applications to Aircraft (Macmillan, 1944). His project was the Mustang fighter. However, Liming was sadly way ahead of his time; there weren’t as yet any working computers or ancillary equipment to make use of his ideas. Luckily, we had a copy of the book at Boeing, which got us off to a flying start. We also had a mighty project to try our ideas on – and a team of old B-17/29 engineers who by now were running the company, rash enough to allow us to commit to an as yet unused and therefore unproven technology.

Computer-aided manufacturing (CAM) comprises the use of computer-controlled manufacturing machinery to assist engineers and machinists in manufacturing or prototyping product components, either with or without the assistance of CAD. CAM certainly preceded CAD and played a pivotal role in bringing CAD to fruition by acting as a drafting machine in the very early stages. All early CAM parts were made from the engineering drawing. The origins of CAM were so widespread that it is difficult to know whether any one group was aware of another. However, the NC machinery suppliers, Kearney & Trecker etc, certainly knew their customers and would have catalysed their knowing one another, while the Aero-Space industry traditionally collaborated at the technical level however hard they competed in the selling of airplanes.

2. Computer-Aided Manufacturing (CAM) in the Boeing Aerospace Factory in Seattle

(by Ken McKinley)

The world’s first two computers, built in Manchester and Cambridge Universities, began to function as early as 1948 and 1949 respectively, and were set to work to carry out numerical computations to support the solution of scientific problems of a mathematical nature. Little thought, if any, was entertained by the designers of these machines to using them for industrial purposes. However, only seven years later the range of applications had already spread out to supporting industry, and by 1953 Boeing was able to order a range of Numerically-Controlled machine tools, requiring computers to transform tool-makers’ instructions to machine instructions. This is a little remembered fact of the early history of computers, but it was probably the first break of computer application away from the immediate vicinity of the computer room.

The work of designing the software, the task of converting the drawing of a part to be milled to the languages of the machines, was carried out by a team of about fifteen people from Seattle and Wichita under my leadership. It was called the Boeing Parts-Programming system, the precursor to an evolutionary series of Numerical Control languages, including APT – Automatically Programmed Tool, designed by Professor Doug Ross of MIT. The astounding historical fact here is that this was among the first ever computer compilers. It followed very closely on the heels of the first version of FORTRAN. Indeed it would be very interesting to find out what, if anything, preceded it.

As early as it was in the history of the rise of computer languages, members of the team were already aficionados of two rival contenders for the job, FORTRAN on the IBM 704 in Seattle, and COBOL on the 705 in Wichita. This almost inevitably resulted in the creation of two systems (though they appeared identical to the user): Boeing and Waldo, even though ironically neither language was actually used in the implementation. Remember, we were still very early on in the development of computers and no one yet had any monopoly of wisdom in how to do anything.

The actual programming of the Boeing system was carried out in computer machine language rather than either of the higher-level languages, since the latter were aimed at a very different problem area to that of determining the requirements of machine tools.

A part of the training of the implementation team consisted of working with members of the Manufacturing Department, probably one of the first ever interdisciplinary enterprises involving computing. The computer people had to learn the language of the Manufacturing Engineer to describe aluminium parts and the milling machine processes required to produce them. The users of this new language were to be called Parts Programmers (as opposed to computer programmers).

A particularly tough part of the programming effort was to be found in the “post processors”, the detailed instructions output from the computer to the milling machine. To make life interesting there was no standardisation between the available machine tools. Each had a different physical input mechanism; magnetic tape, analog or digital, punched Mylar tape or punched cards. They also had to accommodate differences in the format of each type of data. This required lots of discussion with the machine tool manufacturers - all very typical of a new industry before standards came about.

A memorable sidelight, just to make things even more interesting, was that Boeing had one particular type of machine tool that required analog magnetic tape as input. To produce it the 704 system firstly punched the post processor data into standard cards. These were then sent from the Boeing plant to downtown Seattle for conversion to a magnetic tape, then back to the Boeing Univac 1103A for conversion from magnetic to punched tape, which was in turn sent to Wichita to produce analog magnetic tape. This made the 1103A the world’s largest, most expensive punched tape machine. As a historical footnote, anyone brought up in the world of PCs and electronic data transmission should be aware of what it was like back in the good old days!

Another sidelight was that detecting and correcting parts programming errors was a serious problem, both in time and material. The earliest solution was to do an initial cut in wood or plastic foam or, on suitable machine tools, to replace the cutter with a pen or diamond scribe and ‘draw’ the part. This was thus the first ever use of an NC machine tool as a computer-controlled drafting machine – a technique that later proved vital to the advent of Computer-Aided Design.

Meanwhile the U. S. Air Force recognised that the cost and complication of the diverse solutions provided by their many suppliers of Numerical Control equipment was a serious problem. Because of the Air Force’s association with MIT they were aware of the efforts of Professor Doug Ross to develop a standard NC computer language. Ken McKinley, as the Boeing representative, spent two weeks at the first APT (Automatically Programmed Tool) meeting at MIT in late 1956, with representatives from many other aircraft-related companies, to agree on the basic concepts of a common system where each company would contribute a programmer to the effort for a year. Boeing committed to support mainly the ‘post processor’ area. Henry Pinter, one of their post-processor experts, was sent to San Diego for a year, where the joint effort was based. As usually happened in those pioneering days it took more like 18 months to complete the project. After that we had to implement APT in our environment at Seattle.

Concurrently with the implementation we had to sell ourselves and the users on the new system. It was a tough sell believe me, as Norm Sanders was to discover later over at the Airplane Division. Our own system was working well after overcoming the many challenges of this new technology, which we called NC. The users of our system were not anxious to change to an unknown new language that was more complex. But upper management recognized the need to change, not least because of an important factor, the imminence of another neophytic technology called Master Dimensions.

3. Computer-Aided Design (CAD) in the Boeing Airplane Division in Renton

(by Norman Sanders)

The year was 1959. I had just joined Boeing in Renton, Washington, at a time when engineering design drawings the world over were made by hand, and had been since the beginning of time; the definition of every motorcar, aircraft, ship and mousetrap consisted of lines drawn on paper, often accompanied by mathematical calculations where necessary and possible. What is more, all animated cartoons were drawn by hand. At that time, it would have been unbelievable that what was going on in the aircraft industry would have had any effect on The Walt Disney Company or the emergence of the computer games industry. Nevertheless, it did. Hence, this is a strange fact of history that needs a bit of telling.

I was very fortunate to find myself working at Boeing during the years following the successful introduction of its 707 aircraft into the world’s airlines. It exactly coincided with the explosive spread of large computers into the industrial world. A desperate need existed for computer power and a computer manufacturer with the capacity to satisfy that need. The first two computers actually to work started productive life in 1948 and 1949; these were at the universities of Manchester and Cambridge in England. The Boeing 707 started flying five years after that, and by 1958, it was in airline service. The stage was set for the global cheap travel revolution. This took everybody by surprise, not least Boeing. However, it was not long before the company needed a shorter-takeoff airplane, namely the 727, a replacement for the Douglas DC-3. In time, Boeing developed a smaller 737, and a large capacity airplane – the 747. All this meant vast amounts of computing and as the engineers got more accustomed to using the computer there was no end to their appetite.

And it should perhaps be added that computers in those days bore little superficial similarity to today’s computers; there were certainly no screens or keyboards! Though the actual computing went at electronic speeds, the input-output was mechanical - punched cards, magnetic tape and printed paper. In the 1950s, the computer processor consisted of vacuum tubes, the memory of ferrite core, while the large-scale data storage consisted of magnetic tape drives. We had a great day if the computer system didn’t fail during a 24 hour run; the electrical and electronic components were very fragile.

We would spend an entire day preparing for a night run on the computer. The run would take a few minutes and we would spend the next day wading through reams of paper printout in search of something, sometimes searching for clues to the mistakes we had made. We produced masses of paper. You would not dare not print for fear of letting a vital number escape. An early solution to this was faster printers. About 1960 Boeing provided me with an ANalex printer. It could print one thousand lines a minute! Very soon, of course, we had a row of ANalex printers, wall to wall, as Boeing never bought one of anything. The timber needed to feed our computer printers was incalculable.

4. The Emergence of Computer Plots

With that amount of printing going on it occurred to me to ask the consumers of printout what they did with it all. One of the most frequent answers was that they plotted it. There were cases of engineers spending three months drawing curves resulting from a single night’s computer run. A flash of almost heresy then struck my digital mind. Was it possible that we could program a digital computer to draw (continuous) lines? In the computing trenches at Boeing we were not aware of the experimentation occurring at research labs in other places. Luckily at Boeing we were very fortunate at that time to have a Swiss engineer in our computer methods group who could both install hardware and write software for it; he knew hardware and software, both digital and analog. His name was Art Dietrich. I asked Art about it, which was to me the unaskable; to my surprise Art thought it was possible. So off he went in search of a piece of hardware that we could somehow connect to our computer that could draw lines on paper.

Art found two companies that made analog plotters that might be adaptable. One company was Electro Instruments in San Diego and the other was Electronic Associates in Long Branch, New Jersey. After yo-yoing back and forth, we chose the Electronic Associates machine. The machine could draw lines on paper 30x30 inches, at about twenty inches per second. It was fast! But as yet it hadn’t been attached to a computer anywhere. Moreover, it was accurate enough for most purposes. To my knowledge, this was the first time anyone had put a plotter in the computer room and produced output directly in the form of lines. It could have happened elsewhere, though I was certainly not aware of it at the time. There was no software, of course, so I had to write it myself. The first machine ran off cards punched as output from the user programs, and I wrote a series of programs: Plot1, Plot2 etc. Encouraged by the possibility of selling another machine or two around the world, the supplier built a faster one running off magnetic tape, so I had to write a new series of programs: Tplot1, Tplot2, etc. (T for tape). In addition, the supplier bought the software from us - Boeing’s first software sale!

While all this was going on we were pioneering something else. We called it Master Dimensions. Indeed, we pioneered many computing ideas during the 1960s. At that time Boeing was probably one of the leading users of computing worldwide and it seemed that almost every program we wrote was a brave new adventure. Although North American defined mathematically the major external surfaces of the wartime Mustang P-51 fighter, it could not make use of computers to do the mathematics or to construct it because there were no computers. An account of this truly epochal work appears in Roy Liming’s book.

By the time the 727 project was started in 1960, however, we were able to tie the computer to the manufacturing process and actually define the airplane using the computer. We computed the definition of the outer surface of the 727 and stored it inside the computer, making all recourse to the definition via a computer run, as opposed to an engineer looking at drawings using a magnifying glass. This was truly an industrial revolution.

Indeed, when I look back on the history of industrial computing as it stood fifty years ago I cringe with fear. It should never have been allowed to happen, but it did. And the reason why it did was because we had the right man, Grant W. Erwin Jr, in the right place, and he was the only man on this planet who could have done it. Grant was a superb leader – as opposed to manager – and he knew his stuff like no other. He knew the mathematics, Numerical Analysis, and where it didn’t exist he created new methods. He was loved by his team; they would work all hours and weekends without a quibble whenever he asked them to do so. He was an elegant writer and inspiring teacher. He knew what everyone was doing; he held the plan in his head. If any single person can be regarded as the inventor of CAD it was Grant. Very sadly he died, at the age of 94, just as the ink of this chapter was drying.

When the Master Dimensions group first wrote the programs, all we could do was print numbers and draw plots on 30x30 inch paper with our novel plotter. Mind-blowing as this might have been, it did not do the whole job. It did not draw full scale, highly accurate engineering lines. Computers could now draw, but they could not draw large pictures or accurate ones – or so we thought.

5. But CAM to the Rescue!

Now there seems to be a widely-held belief that computer-aided design (CAD) preceded computer-aided manufacturing (CAM). All mention of the topic carries the label CAD-CAM rather than the reverse, as though CAD led CAM. However, this was not the case, as comes out clearly in Ken McKinley’s section above. Since both started in the 1956-1960 period, it seems a bit late in the day now to raise an old discussion. However, there may be a few people around still with the interest and the memory to try to get the story right. The following is the Boeing version, at least, as remembered by some long retired participants.

5.1 Numerical Control Systems

The Boeing Aerospace division began to equip its factory about 1956 with NC machinery. There were several suppliers and control systems, among them Kearney & Trecker, Stromberg-Carlson and Thompson Ramo Wooldridge (TRW). Boeing used them for the production of somewhat complicated parts in aluminium, the programming being carried out by specially trained programmers. I hasten to say that these were not computer programmers; they were highly experienced machinists known as parts programmers. Their use of computers was simply to convert an engineering drawing into a series of simple steps required to make the part described. The language they used was similar in principle to basic computer languages in that it required a problem to be analyzed down to a series of simple steps; however, the similarity stopped right there. An NC language needs commands such as select tool, move tool to point (x,y), lower tool, turn on coolant. The process required a deep knowledge of cutting metal; it did not need to know about memory allocation or floating point.

It is important to recognize that individual initiative from below very much characterized the early history of computing - much more than standard top-down managerial decisions. Indeed, it took an unconscionable amount of time before the computing bill reached a level of managerial attention. It should not have been the cost, it should have been the value of computing that brought management to the punch. But it wasn’t. I think the reason for that was that we computer folk were not particularly adept at explaining to anyone beyond our own circles what it was that we were doing. We were a corporate ecological intrusion which took some years to adjust to.

5.2 Information Consolidation at Boeing

It happened that computing at Boeing started twice, once in engineering and once in finance. My guess is that neither group was particularly aware of the other at the start. It was not until 1965 or so, after a period of conflict, that Boeing amalgamated the two areas, the catalyst being the advent of the IBM 360 system that enabled both types of computing to cohabit the same hardware. The irony here was that the manufacturing area derived the earliest company tangible benefits from computing, but did not have their own computing organization; they commissioned their programs to be written by the engineering or finance departments, depending more or less on personal contacts out in the corridor.

As Ken McKinley describes above, in the factory itself there were four different control media; punched Mylar tape, 80-column punched cards, analog magnetic tape and digital magnetic tape. It was rather like biological life after the Cambrian Explosion of 570 million years ago – on a slightly smaller scale. Notwithstanding, it worked! Much investment had gone into it. By 1960, NC was a part of life in the Boeing factory and many other American factories. Manufacturing management was quite happy with the way things were and they were certainly not looking for any more innovation. ‘Leave us alone and let’s get the job done’ was their very understandable attitude. Nevertheless, modernisation was afoot, and they embraced it.

The 1950s was a period of explosive computer experimentation and development. In just one decade, we went from 1K to 32K memory, from no storage backup at all to multiple drives, each handling a 2,400-foot magnetic tape, and from binary programming to Fortran 1 and COBOL. At MIT, Professor Doug Ross, learning from the experience of the earlier NC languages, produced a definition for the Automatically Programmed Tool (APT) language, the intention being to find a modern replacement for the already archaic languages that proliferated across the 1950s landscape. How fast things were beginning to move suddenly, though it didn’t seem that way at the time.

5.3 New Beginnings

Since MIT had not actually implemented APT, the somewhat loose airframe manufacturers’ computer association got together to write an APT compiler for the IBM 7090 computers in 1961. Each company sent a single programmer to Convair in San Diego and it took about a year to do the job, including the user documentation. This was almost a miracle, and was largely due to Professor Ross’s well-thought through specification.

When our representative, Henry Pinter, returned from San Diego, I assumed the factory would jump on APT, but they didn’t. At the Thursday morning interdepartmental meetings, whenever I said, “APT is up and running folks, let’s start using it”, Don King from Manufacturing would say, “but APT don’t cut no chips”. (That’s how we talked up there in the Pacific Northwest.) He was dead against these inter-company initiatives; he daren’t commit the company to anything we didn’t have full control over. However, eventually I heard him talking. The Aerospace Division (Ed Carlberg and Ken McKinley) were testing the APT compiler but only up to the point of a printout; no chips were being cut because Aerospace did not have a project at that time. So I asked them to make me a few small parts and some chips swept up from the floor, which they kindly did. I secreted the parts in my bag and had my secretary tape the chips to a piece of cardboard labeled ‘First ever parts cut by APT’. At the end of the meeting someone brought up the question of APT. ‘APT don’t cut no chips’ came the cry, at which point I pulled out my bag from under the table and handed out the parts for inspection. Not a word was spoken - King’s last stand. (That was how we used to make decisions in those days.)

These things happened in parallel with Grant Erwin’s development of the 727-CAD system. In addition, one of the facilities of even the first version of APT was to accept interpolated data points from CAD which made it possible to tie the one system in with the other in what must have been the first ever CAM-CAD system. When I look back on this feature alone nearly fifty years later I find it nothing short of miraculous, thanks to Doug Ross’s deep understanding of what the manufacturing world would be needing. Each recourse to the surface definition was made in response to a request from the Engineering Department, and each numerical cut was given a running Master Dimensions Identifier (MDI) number. This was not today’s CAM-CAD system in action; again, no screen, no light pen, no electronic drawing. Far from it; but it worked! In the early 1960s the system was a step beyond anything that anyone else seemed to be doing - you have to start somewhere in life.

6. Developing Accurate Lines

An irony of history was that the first mechanical movements carried out by computers were not a simple matter of drawing lines; they were complicated endeavors of cutting metal. The computer-controlled equipment consisted of vast multi-ton machines spraying aluminum chips in all directions. The breakthrough was to tame the machines down from three dimensions to two, which happened in the following extraordinary way. It is perhaps one of the strangest events in the history of computing and computing graphics, though I don’t suppose anyone has ever published this story. Most engineers know about CAD; however, I do not suppose anyone outside Boeing knows how it came about.

6.1 So, from CAM to CAD

Back to square one for a moment. As soon as we got the plotter up and running, Art Dietrich showed some sample plots to the Boeing drafting department management. Was the plotting accuracy good enough for drafting purposes? The answer - a resounding No! The decision was that Boeing would continue to draft by hand until the day someone could demonstrate something that was superior to what we were able to produce. That was the challenge. However, how could we meet that challenge? Boeing would not commit money to acquiring a drafting machine (which did not exist anyway) without first subjecting its output to intense scrutiny. Additionally, no machine tool company would invest in such an expensive piece of new equipment without an order or at least a modicum of serious interest. How do you cut this Gordian knot?

In short, at that time computers could master-mind the cutting of metal with great accuracy using three-dimensional milling machines. Ironically, however, they could not draw lines on paper accurately enough for design purposes; they could do the tough job but not the easy one.

However, one day there came a blinding light from heaven. If you can cut in three dimensions, you can certainly scratch in two. Don’t do it on paper; do it on aluminium. It had the simplicity of the paper clip! Why hadn’t we thought of that before? We simply replaced the cutter head of the milling machine with a tiny diamond scribe (a sort of diamond pen) and drew lines on sheets of aluminium. Hey presto! The computer had drawn the world’s first accurate lines. This was done in 1961.

The next step was to prove to the 727 aircraft project manager that the definition that we had of the airplane was accurate, and that our programs worked. To prove it they gave us the definition of the 707, an aircraft they knew intimately, and told us to make nineteen random drawings (canted cuts) of the wing using this new idea. This we did. We trucked the inscribed sheets of aluminium from the factory to the engineering building and for a month or so engineers on their hands and knees examined the lines with microscopes. The Computer Department held its breath. To our knowledge this had never happened before. Ever! Anywhere! We ourselves could not be certain that the lines the diamond had scribed would match accurately enough the lines drawn years earlier by hand for the 707. At the end of the exercise, however, industrial history emerged at a come-to-God meeting. In a crowded theatre the chief engineer stood on his feet and said simply that the design lines that the computer had produced had been under the microscope for several weeks and were the most accurate lines ever drawn - by anybody, anywhere, at any time. We were overjoyed and the decision was made to build the 727 with the computer. That is the closest I believe anyone ever came to the birth of Computer-Aided Design. We called it Design Automation. Later, someone changed the name. I do not know who it was, but it would be fascinating to meet that person.

6.2 CAM-CAD Takes to the Air

Here are pictures of the first known application of CAM-CAD. The first picture is that of the prototype of the 727. Here you can clearly see the centre engine inlet just ahead of the tail plane. Seen from the front it is elliptical, as can be seen from the following sequence of manufacturing stages:- (Images of the manufacturing stages of the 727 engine inlet are shown here)

6.3 An Unanticipated Extra Benefit

One of the immediate, though unanticipated, benefits of CAD was transferring detailed design to subcontractors. Because of our limited manufacturing capacity, we subcontracted a lot of parts, including the rear engine nacelles (the covers) to the Rohr Aircraft Company of Chula Vista in California. When their team came up to Seattle to acquire the drawings, we instead handed them boxes of data in punched card form. We also showed them how to write the programs and feed their NC machinery. Their team leader, Nils Olestein, could not believe it. He had dreamed of the idea but he never thought he would ever see it in his lifetime: accuracy in a cardboard box! Remember that in those days we did not have email or the ability to send data in the form of electronic files.

6.4 Dynamic Changes

The cultural change to Boeing due to the new CAD systems was profound. Later on we acquired a number of drafting machines from the Gerber Company, who now knew that there was to be a market in computer-controlled drafting, and the traditional acres of drafting tables began slowly to disappear. Hand drafting had been a profession since time immemorial. Suddenly its existence was threatened, and after a number of years, it no longer existed. That also goes for architecture and almost any activity involving drawing.

Shortly afterwards, as the idea caught on, people started writing CAD systems which they marketed widely throughout the manufacturing industry as well as in architecture. Eventually our early programs vanished from the scene after being used on the 737 and 747, to be replaced by standard CAD systems marketed by specialist companies. I suppose, though, that even today’s Boeing engineers are unaware of what we did in the early 1960s; generally, corporations are not noted for their memory.

Once the possibility of drawing with the computer became known, the idea took hold all over the place. One of the most fascinating areas was to make movie frames. We already had flight simulation; Boeing ‘flew’ the Douglas DC-8 before Douglas had finished building it. We could actually experience the airplane from within. We did this with analog computers rather than digital. Now, with digital computers, we could look at an airplane from the outside. From drawing aircraft one could very easily draw other things such as motorcars and animated cartoons. At Boeing we established a Computer Graphics Department around 1962 and by 1965 they were making movies by computer. (I have a video tape made from Boeing’s first ever 16mm movie if anyone’s interested.) Although slow and simple by today’s standards, it had become an established activity. The rest is part of the explosive story of computing, leading up to today’s marvels such as computer games, Windows interfaces, computer processing of film and all the other wonders of modern life that people take for granted. From non-existent to all-pervading within a lifetime!

7. The Cosmic Dice

Part of the excitement of this computer revolution that we have brought about in these sixty years was the unexpected benefits. To be honest, a lot of what we did, especially in the early days, was pure serendipity; it looked like a good idea at the time but there was no way we could properly justify it. I think had we had to undertake a solid financial analysis most of the projects would never have got off the ground and the computer industry would not have got anywhere near today’s levels of technical sophistication or profitability. Some of the real payoffs have been a result of the cosmic dice throwing us a seven. This happened already twice with the first 727.

The 727 rolled out in November, 1962, on time and within budget, and flew in April, 1963. The 727 project team were, of course, dead scared that it wouldn’t. But the irony is that it would not have happened had we not used CAD. During the early period, before building the first full-scale mockup, as the computer programs were being integrated, we had a problem fitting the wing to the wing-shaped hole in the body; the wing-body join. The programmer responsible for that part of the body program was yet another Swiss by name Raoul Etter. He had what appeared to be a deep bug in his program and spent a month trying to find it. As all good programmers do, he assumed that it was his program that was at fault. But in a moment of utter despair, as life was beginning to disappear down a deep black hole, he went cap in hand to the wing project to own up. “I just can’t get the wing data to match the body data, and time is no longer on my side.” “Show us your wing data. Hey where did you get this stuff?” “From the body project.” “But they’ve given you old data; you’ve been trying to fit an old wing onto a new body.” (The best time to make a design change is before you’ve actually built the thing!) An hour later life was restored and the 727 became a single numerical entity. But how would this have been caught had we not gone numerical? I asked the project. At full-scale mockup stage, they said. In addition to the serious delay what would the remake have cost? In the region of a million dollars. Stick that in your project analysis!

The second occasion was just days prior to roll-out. The 727 has leading-edge flaps, but at installation they were found not to fit. New ones had to be produced overnight, again with the right data. But thanks to the NC machinery we managed it. Don’t hang out the flags before you’ve swept up the final chip.

8. A Fascinating Irony

This discussion is about using the computer to make better pictures of other things. At no time did any of us have the idea of using pictures to improve the way we ran computers. This had to wait for Xerox PARC, a decade or so later, to throw away our punched cards and rub our noses into a colossal missed opportunity. I suppose our only defence is that we were being paid to build airplanes not computers.

9. Conclusion

In summary, CAM came into existence during the late 1950s, catalyzing the advent of CAD in the early 1960s. This mathematical definition of line drawing by computers then radiated out in three principal directions with (a) highly accurate engineering lines and surfaces, (b) faster and more accurate scientific plotting and (c) very high-speed animation. Indeed, the world of today’s computer user consists largely of pictures; the interface is a screen of pictures; a large part of technology lessons at school uses computer graphics. And we must remember that the computers at that time were minuscule compared to today’s PC in terms of memory and processing speed. We’ve come a long way from that 727 wing design.



About Us

An ordinary motor engineer!
I used to want only to do the best motor designs; now I repair motor-design tools.
I hope I can help you make sense of electromagnetic concepts, put out fires on projects, and customize ANSYS Maxwell.

Learn more