Types of simulation programs

Purchase of computers and high-fidelity simulation models, and their maintenance, requires considerable funds. Amortization of such equipment will be achieved through appropriate and rapid training of students and, consequently, through the health care provided to patients by well-trained nurses (45). Familiarization of educators with technology, in general, is a necessary condition for smooth operation of the simulation programme and appropriate training of students.

Being a simulation educator is different from being a professor in a nursing school. However, such distinctions are rarely made; health educators are inadequately trained and have limited skills, and as a result such training is ineffective. Incomplete training is another significant limitation that may appear in simulation.

A poorly designed scenario may result in negative learning. For example, if certain physical reactions are missing during the simulation process, the students may neglect them and fail to test them.

Frequently, due to time constraints, simulation fails to assess some essential parameters of the health care procedure and of communication. Thus, students fail to ask for or obtain the patient's consent to a medical procedure, or fail to follow basic rules of communication, which are necessary for establishing personal contact and creating a healing environment. The attitude of trainees is also of great interest.

Participants will always approach a simulator differently from the way they approach a real-life situation. Educators play a significant part in the successful implementation of simulation programmes.

It cannot be assumed that a nursing professor is automatically a simulation educator. Knowledge of technology and technological applications is necessary for successfully teaching the parameters of nursing science. A study by Simes et al. identified barriers to simulation teaching, including structural barriers (such as understanding of educational material and access to teaching and learning resources) and human resource barriers.

To address such barriers, they provided suggestions including the presence of a mentor, more training in simulation-related issues, carrying out rehearsals, and the creation of backup copies.

In addition, the availability or lack of resources affects the ability of educators to include simulation activities in all courses of study, according to a study by MacKinnon et al. Some students report that the role of an educator in simulation programmes is very significant and that it must combine the role of a clinical nurse and that of an educator, because this is the only way to enhance learning and the realism of different scenarios. McAllister et al. offer further suggestions for improving simulation-based teaching.

These suggestions include: providing assistance to educators in their work, enabling students to have direct access to clinical skills videos, focusing on teaching clinical skills, utilizing teams in the documentation on skill learning, learning communication skills in an entertaining and imaginative way, and improving time management and prioritization of needs for students.

Nurses and their training are fundamental to the effectiveness of the health system; therefore, special attention is, and must continue to be, paid to them. Any changes in the training of nurses are interwoven with technological advances, and their training is directly affected by the technological means available for teaching. The use of simulation as an educational strategy represents a great challenge for nursing education.

Simulation may improve health care and patient safety. No living patient is put at risk for the sake of training. Simulation provides standardization of cases, promotes critical thinking, allows supervision of patient care, provides immediate feedback, and helps students to assimilate knowledge and experience.

It is an ideal composite learning experience. Probably the greatest change in nursing education is the introduction of virtual simulation. The continuation and development of virtual simulation constitutes a focal point for nursing science and for the progress of nursing students. This requires investment of funds in the establishment of appropriate laboratories by nursing schools, time for simulation as provided for in the curricula, and educators who are properly trained to create various scenarios and operate simulators.

The use of virtual simulation must become a part of the overall simulation programme. Despite the fact that virtual and augmented reality are at a quite early stage, this option will rapidly spread, as soon as simulation-related technology becomes available and affordable.

The quality of simulation devices will provide opportunities for training of students in skills that used to require actual educators in the past, thus opening up new opportunities for schools to reallocate their financial resources. The objective of nursing education, apart from the acquisition of solid theoretical knowledge, is the acquisition of clinical skills, which are necessary for graduate nurses to be promptly integrated into the workforce. Integrated learning, critical thinking, and optimal decision-making skills help nurses to provide quality health care.

This can be achieved through the inclusion of simulation in the education process. Further development of simulation, along with other educational methods, may be of great assistance to students in becoming integrated and successful healthcare professionals. All authors were involved in all steps of the preparation of this article.

Final proofreading was done by the first author.

Abstract (Acta Inform Med)

Background: Simulation constitutes a teaching method and a strategy for learning and understanding theoretical knowledge and skills in the nursing and medical field.

Objective: To review and present modern data related to this issue. Methods: Literature review of data related to the issue derived from the Medline, CINAHL, and Scopus databases, in English, using the following keywords: nursing, simulation, simulator, nursing laboratory.

Results: The implementation of simulation enables students to practice their clinical and decision-making skills for some significant issues they may face in their daily work.

Conclusion: The further development of simulation, along with other instructional techniques, can significantly help the efforts made by students to become integrated and successful healthcare professionals. Keywords: nursing, simulation, simulator, nursing laboratory.

Simulation is a teaching method where, following a certain scenario, students experience the actual dimensions of their future professional roles, which helps them to be more quickly integrated into the workforce of the healthcare sector (5, 6). In nursing science, simulation is used for teaching theoretical and clinical skills, while focusing on the promotion of the critical thinking of students (7, 8).

Simulation Types in Nursing Education

Through the use of simulation, an attempt is made to replace real patients with virtual standardized patients, or with technologies and methods capable of reproducing actual clinical scenarios for therapeutic and educational purposes. The main types are:

- High-fidelity mannequins or technologies
- Low-fidelity mannequins
- Partial task simulators
- Standardized patients (volunteers playing the roles of patients)
- E-learning (usually knowledge testing)
- Hybrid simulation

Benefits of patient simulation in Nursing Education

Simulation, as an evidence-based educational technique and process, first appeared when it became difficult for nurses working in a hospital to acquire clinical experience.

Limitations on the use of simulation in Nursing Education

Simulation is widely used in nursing schools, and its use continues to spread, since the benefits are enormous.

Especially for constructive simulation models that have relatively fast turnarounds, there is no reason to perform such a limited analysis. External validation and sensitivity analysis are often accompanied by the following activities: (1) model verification, checking to see that the computer code accurately reflects the modeler's specifications; (2) evaluation of model output either for particular inputs or for extreme inputs, where the resulting output can be evaluated using subject-matter expertise; and (3) comparisons of the model output with the output of other models or with simple calculations.

Model verification, of course, is an extremely important activity. The quasivalidation activities (2) and (3) can lead to increased face validity—which we define here as the model's output agreeing with prior subject-matter understanding—and this is often worthwhile.

However, it is important to stress that face validity is insufficient. If validation is limited to these latter activities, one can be misled by agreement with preconceived notions or with models that are based on a set of commonly held and unverified assumptions. In addition: there will be little or no support for important feedback loops to help indicate areas of the model in need of improvement; there will be little indication of the quality of the model for predicting various outputs of interest; and it will be impossible to construct hypothesis tests that indicate whether discrepancies with the results of other models or with field results are due to natural variation or to real differences in the models.

Therefore, face validity must be augmented with more rigorous forms of model validation. In recent years a number of statistical advances have been made that are relevant to the practice of model validation; we have not seen evidence of their use in the validation of constructive simulation models used for defense testing.

We note five of the advances that should be considered. Morgan and Henrion, McKay, and others have identified techniques for carrying out an uncertainty analysis in order to assist external validation. One-variable-at-a-time sensitivity analysis assumes a linear response surface, is inefficient in its use of model runs, and is too narrowly focused on a central scenario.

These deficiencies can be addressed by using inputs produced by Latin Hypercube sampling (McKay et al.). To help analyze the resulting paired input-output data set, McKay's ANOVA decomposition of outputs from models run on inputs selected by Latin Hypercube sampling is useful for identifying which outputs are sensitive to which inputs.
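As an illustration, a minimal Latin Hypercube sampler can be written in a few lines. This is a sketch only: the `toy_model` standing in for an expensive constructive simulation is hypothetical, not from the source.

```python
import numpy as np

def latin_hypercube(n_samples, n_inputs, rng=None):
    """Draw a Latin Hypercube sample on the unit cube [0, 1]^n_inputs.

    Each input's range is split into n_samples equal strata; one point is
    drawn uniformly inside each stratum, and the strata are paired across
    inputs by independent random permutations.
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=(n_samples, n_inputs))   # one draw per stratum, per input
    samples = np.empty((n_samples, n_inputs))
    for j in range(n_inputs):
        strata = rng.permutation(n_samples)       # random pairing across inputs
        samples[:, j] = (strata + u[:, j]) / n_samples
    return samples

# Hypothetical stand-in for a constructive simulation: a cheap analytic function.
def toy_model(x):
    return 3.0 * x[:, 0] + 0.1 * x[:, 1] ** 2

X = latin_hypercube(100, 2, rng=0)
y = toy_model(X)
# Every column is stratified: exactly one point per 1/100-wide bin.
```

The resulting paired `(X, y)` data set is what analyses such as McKay's ANOVA decomposition would then operate on.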

The response surface of these input-output vector pairs can be summarized using various nonparametric regression techniques, such as multivariate adaptive regression splines (Friedman). For a single output, there are techniques that can identify a small collection of important inputs from a larger collection of candidate inputs.

These techniques can be very helpful in simplifying an uncertainty analysis, a sensitivity analysis, and even an external validation, since they indicate which variables are the crucial ones to vary (see Cook; Morris; McKay; and others). One formal approach to the design and analysis of computer experiments has been developed by Sacks et al. Identifying outliers to patterns shown by the great majority of inputs to a constructive simulation can be used to better understand regions of the input-output space in which the behavior of the simulation changes qualitatively.
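One crude screening technique in this spirit, assuming an approximately linear response, is to regress the output on standardized inputs and rank the absolute coefficients. The six-input `simulation` function below is a hypothetical stand-in, not one of the cited methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a constructive simulation with 6 candidate inputs,
# only two of which (columns 0 and 3) actually drive the output.
def simulation(X):
    return 4.0 * X[:, 0] - 2.5 * X[:, 3] + 0.05 * rng.normal(size=len(X))

X = rng.uniform(size=(200, 6))   # in practice, a Latin Hypercube design
y = simulation(X)

# Standardize, regress, and rank |coefficients| as an importance screen.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
ranking = np.argsort(-np.abs(beta))
print(ranking[:2])   # the two dominant inputs
```

A subsequent uncertainty or sensitivity analysis could then vary only the inputs at the top of `ranking`, as the text suggests.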

The developers of a constructive simulation often make fairly arbitrary choices about model form. Citro and Hanushek, Chatfield, Draper, and others offer ways of addressing this problem of misspecification of the relationship between inputs and outputs in computer and statistical models, wherein the simulation is less precise than one would measure using the usual analysis-of-variance techniques.

Some of the above ideas may not turn out to be directly applicable to defense models, but the broad collection of techniques being developed to analyze non-military simulations is likely to be relevant. Given the importance of operational testing, testing personnel should be familiar with this literature to determine its value in the validation of constructive simulations. As noted above, a sensitivity analysis is the study of the impact on model outputs of changes in model inputs and assumptions.

An uncertainty analysis is the attempt to measure the total variation in model outputs due to quantified uncertainty in model inputs and assumptions, and to assess which inputs contribute more or less to the total uncertainty. In addition to model validation, a careful analysis of the assumptions used in developing constructive simulation models is a necessary condition for determining the value of the simulation. Beyond the usual documentation, which for complicated models can be fairly extensive, an "executive summary" of key assumptions used in the simulation model should be provided to experts to help them determine their reasonableness and therefore the utility of the simulation.
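An uncertainty analysis of this kind can be sketched as Monte Carlo propagation of assumed input distributions through the model. Everything below (the `engagement_model` function, the input distributions, the numbers) is a hypothetical illustration, not a real defense model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for a constructive simulation output.
def engagement_model(detect_range, kill_prob):
    return detect_range * kill_prob

# Assumed (quantified) input uncertainty.
n = 10_000
detect_range = rng.normal(loc=20.0, scale=2.0, size=n)   # km
kill_prob = rng.beta(8, 2, size=n)                       # dimensionless

out = engagement_model(detect_range, kill_prob)
lo, hi = np.quantile(out, [0.05, 0.95])
print(f"output mean = {out.mean():.2f}, 90% interval = [{lo:.2f}, {hi:.2f}]")

# Crude attribution: how much output variance remains when one input is
# frozen at its mean (the rest is attributable to the frozen input).
remaining = {
    "detect_range frozen": engagement_model(detect_range.mean(), kill_prob).var() / out.var(),
    "kill_prob frozen": engagement_model(detect_range, kill_prob.mean()).var() / out.var(),
}
for name, frac in remaining.items():
    print(f"{name}: {frac:.0%} of output variance remains")
```

The first print is the total output uncertainty; the freeze-one-input comparison is a first-order indicator of which inputs contribute most.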

A full history of model development, especially any modification of model parameters and their justification, should also be made available to those with the responsibility for accrediting a model for use in operational testing. In model-test-model, a model is developed, a number of operational test runs are carried out, and the model is modified by adjusting parameters so that it is more in agreement with the operational test results.

Such external validation on the basis of operational use is extremely important in informing simulation models used to augment operational testing. However, there is an important difference (one we suspect is not always well understood by the test community) between comparing simulation outputs with test results and using test results to adjust a simulation.

Many complex simulations involve a large number of "free" parameters—those that can be set to different values by the analyst running the simulation. In model-test-model, some of these parameters can be adjusted to improve the correspondence of simulation outputs with the particular operational test results with which they are being compared. When the number of free parameters is large in relation to the amount of available operational test data, close correspondence between a "tuned" simulation and operational results does not necessarily imply that the simulation would be a good predictor in any scenarios differing from those used to tune it.

A large literature is devoted to this problem, known as overfitting. An alternative that would have real advantages would be "model-test-model-test," in which the final test step, using scenarios outside of the "fitted" ones, would provide validation of the version of the model produced after tuning and would therefore guard against overfitting. If there were interest in the model being finalized before any operational testing was performed, this would be an additional reason for developmental testing to incorporate various operationally realistic aspects.
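The model-test-model-test idea amounts to a holdout validation: tune the free parameters on some scenarios, then check prediction on scenarios held back. This is a toy sketch under invented assumptions (the linear `simulation`, the synthetic `field_result`); real tuning would involve many parameters and expensive runs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "true" field behaviour and a simulation with one free parameter.
def field_result(scenario):
    return 2.0 * scenario + rng.normal(scale=0.5, size=np.shape(scenario))

def simulation(scenario, free_param):
    return free_param * scenario

scenarios = np.linspace(1, 10, 12)
observed = field_result(scenarios)

# model-test-model: tune the free parameter against the first 8 scenarios...
tune_s, tune_y = scenarios[:8], observed[:8]
free_param = np.sum(tune_s * tune_y) / np.sum(tune_s ** 2)   # least squares

# ...-test: validate the tuned model on the 4 held-out scenarios.
hold_s, hold_y = scenarios[8:], observed[8:]
rmse = np.sqrt(np.mean((simulation(hold_s, free_param) - hold_y) ** 2))
print(f"tuned free_param = {free_param:.2f}, held-out RMSE = {rmse:.2f}")
```

A small held-out error here is evidence the tuned model generalizes; a held-out error much larger than the tuning error would be the overfitting signature discussed below.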

Overfitting is said to occur for a model and data set combination when a simple version of the model (selected from a model hierarchy formed by setting some parameters to fixed values) is superior in predictive performance to a more complicated version of the model formed by estimating these parameters from the data set. For some types of statistical models, there are commonly accepted measures of the degree of overfitting.

An example is the Cp statistic for multiple regression models: a model with high Cp could be defined as being overfit.

Recommendation 9.

The panel reviewed several documents that describe the process used to decide whether to use a simulation model to augment an operational test.
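The Cp statistic mentioned above can be computed directly from residual sums of squares. The data below are synthetic, for illustration only; a well-specified submodel has Cp close to its parameter count p, and large departures flag a poorly chosen model.

```python
import numpy as np

def mallows_cp(X_sub, X_full, y):
    """Mallows' Cp for a submodel (columns X_sub) of a full model (X_full).

    Cp = SSE_sub / s2_full - n + 2 * p_sub, where s2_full is the residual
    variance of the full model and p_sub the number of submodel parameters.
    """
    n = len(y)

    def sse(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return float(r @ r)

    s2_full = sse(X_full) / (n - X_full.shape[1])
    return sse(X_sub) / s2_full - n + 2 * X_sub.shape[1]

rng = np.random.default_rng(4)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])  # intercept + 4 inputs
y = 1.0 + 3.0 * X[:, 1] + rng.normal(size=n)                # only input 1 matters

cp_good = mallows_cp(X[:, :2], X, y)   # intercept + the one real input
cp_bad = mallows_cp(X[:, :1], X, y)    # intercept only: badly misspecified
print(f"Cp(good) = {cp_good:.1f} (p = 2), Cp(intercept only) = {cp_bad:.1f}")
```

The well-specified submodel lands near its p, while the misspecified one produces a Cp far from it.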

There are differences across the services, but the general approach is referred to as verification, validation, and accreditation. Verification is "the process of determining that model implementation accurately represents the developer's conceptual description and specifications" (U.S. Department of Defense). For constructive simulations, verification means that the computer code is a proper representation of what the software developer intended; the related software testing issues are discussed in Chapter 8.

Validation is "the process of determining (a) the manner and degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model, and (b) the confidence that should be placed on this assessment" (U.S. Department of Defense).

Accreditation is "the official certification that a model or simulation is acceptable for use for a specific purpose" (U.S. Department of Defense). The panel supports the general goals of verification, validation, and accreditation; the emphasis on verification and validation; and the need for formal approval, that is, accreditation, of a simulation model for use in operational testing. Given the crucial importance of model validation in deciding the utility of a simulation for use in operational testing, it is surprising that the constituent parts of a comprehensive validation are not provided in the directives concerning verification, validation, and accreditation.

A statistical perspective is almost entirely absent in these directives. For example, there is no discussion of what it means to demonstrate that the output from a simulation is "close" to results from an operational test. It is not clear what guidelines model developers or testers use to decide how to validate their simulations for this purpose and how accrediters decide that a validation is sufficiently complete and that the results support use of the simulation.

Model validation cannot be algorithmically described, which may be one reason for the lack of specific instruction in the directives. A test manager would greatly benefit from examples, advice on what has worked in the past and what pitfalls to avoid, and, most importantly, specific requirements as to what constitutes a comprehensive validation. This situation is similar to that described in Chapter 1, regarding the statistical training of those in charge of test planning and evaluation.

Model validation has an extensive literature in a variety of disciplines, including statistics and operations research, much of it quite technical, on how to demonstrate that a computer model is an acceptable representation of the system of interest for a specific purpose. Operational test managers need to become familiar with the general techniques represented in this literature, and to have access to experts as needed. We suggest, then, a set of four activities that can jointly form a comprehensive process of validation: (1) justification of model form, (2) an external validation, (3) an uncertainty analysis, including the contribution from model misspecification or alternative specifications, and (4) a thorough sensitivity analysis.

All important assumptions should be explicitly communicated to those in a position to evaluate their merit. This could be done in the "executive summary" described above. A model's outputs should be compared with operational experience. The scenarios chosen for external validation of a model must be selected so that the model is tested under extreme as well as typical conditions.

The need to compare the simulation with operational experience raises a serious problem for simulations used in operational test design, but it can be overcome by using operationally relevant developmental test results.

Although external validation can be expensive, the number of replications should be decided on the basis of a cost-benefit analysis (see the discussion in Chapter 5 on "how much testing is enough"). External validation is a uniquely valuable method for obtaining information about a simulation model's validity for use in operational testing, and is vital for accreditation.

An indication of the uncertainty in model outputs as a function of uncertainty in model inputs, including uncertainty due to model form, should be produced. This activity can be extremely complicated, and what is feasible today may be somewhat crude, but DoD experience at this will improve as it is attempted for more models. In addition, exploration of alternative model forms will have benefits in providing further understanding of the advantages and limitations of the current model and in suggesting modifications of its current form.

An analysis of which inputs importantly affect which outputs, and the direction of the effect, should be carried out and evaluated by those with knowledge of the system being developed. The literature cited above suggests a number of methods for carrying out a comprehensive sensitivity analysis. It will often be necessary to carry out these steps on the basis of a reduced set of "important" inputs: whatever process is used to focus the analysis on a smaller number of inputs should be described.

There are tutorials provided at conferences and other settings, and excellent reports in the DoD community. A description of any methods used to reduce the number of inputs under analysis should be included in each of the steps. Models and simulations used for operational testing and evaluation must be archived and fully documented, including the objective of the use of the simulation and the results of the validation.

The purpose of a simulation is a crucial factor in validation. For some purposes, the simulation only needs to be weakly predictive, such as being able to rank scenarios by their stress on a system, rather than to predict actual performance. For other purposes, a simulation needs to be strongly predictive. Experience should help indicate, over time, which purposes require what degree and what type of predictive accuracy. Models and simulations are often written in a general form so that they will have wide applicability for a variety of related systems.

A simulation is a representation of the real world on a computer. Software consists of programs and routines designed to run on computers. Simulation software is the name given to computer software that represents real-world situations and experiences in a computer environment for study, entertainment, projection, increasing efficiency, modeling possible alternatives in advance of a strategic choice, and other purposes. One way of categorizing the different types of simulation software is by the application area of the simulation.

In academic settings, simulation software is used in application areas such as agriculture, business, communications, defense, health, manufacturing, oil terminals, service, traffic, and waste processing.

There are also many programs that can be used for financial and business forecasting, as well as simulation programs for use in medicine. Some computer simulation programs are developed to perform similar functions as other simulators, while also providing interactive or entertainment functionality, such as flight simulator training programs and games.

Computer simulation programs are types of software developed to receive input information, either manually entered or automatically generated through sensors and other devices.

This data is then used to generate a model or mathematical algorithm that can be used to simulate and predict a number of different behaviors and reactions.

These types of programs are used in a wide range of industries and applications, and can vary greatly in terms of complexity and accuracy.
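As a minimal illustration of this input-to-prediction pipeline, a program can fit a model to input data and then simulate forward. All numbers below are hypothetical sensor readings, and the Newton-cooling model form is an assumption made for the example.

```python
import numpy as np

# Hypothetical sensor readings: time (s) vs. temperature (deg C) of a cooling part.
t = np.array([0.0, 60.0, 120.0, 180.0, 240.0])
temp = np.array([90.0, 72.5, 59.8, 50.6, 43.9])

# Fit a Newton-cooling model T(t) = T_env + (T0 - T_env) * exp(-k t) by linear
# regression on log(T - T_env), with an assumed ambient temperature of 25 deg C.
T_env = 25.0
slope, intercept = np.polyfit(t, np.log(temp - T_env), 1)
k = -slope                       # cooling rate, 1/s
T0 = T_env + np.exp(intercept)   # fitted initial temperature

def simulate(t_future):
    """Predict temperature at a future time from the fitted model."""
    return T_env + (T0 - T_env) * np.exp(-k * t_future)

print(f"fitted k = {k:.4f} 1/s, predicted T(600 s) = {simulate(600.0):.1f} deg C")
```

The sensor data drive the model fit, and the fitted model then predicts behavior outside the observed range, which is the pattern the paragraph above describes.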


