Physics-based Modeling for Animation and Medical Applications
Real-Time Physics-Based Animation of Curtain Motion
成迟薏¹ 石教英¹ 徐迎庆² †沈向洋²
¹ CAD&CG Laboratory, Zhejiang University, Hangzhou 310027
² Microsoft Research China, Beijing 100080
Abstract: This paper presents a physics-based method for real-time dynamic simulation of curtain motion.
Much prior work has proposed methods for simulating fabric motion, but a major drawback of these methods is their high complexity and low computational efficiency, which cannot meet the demands of interactive design and real-time animation.
The method proposed in this paper uses a semi-rigid spline structure based on the physical properties of fabric, handling the constraints in the warp and weft directions separately. This greatly simplifies the computation and allows fabric motion to be simulated in real time.
Keywords: physics-based modeling, curtain, animation, virtual reality, collision detection, semi-rigid body

1 Introduction
Fabric is everywhere in daily life.
How to simulate fabric motion efficiently and realistically on a computer has long been a challenging topic in computer graphics research. In addition, the growing fields of computer-aided design and e-commerce demand fast, realistic fabric simulation algorithms, to help designers use computers to design and produce efficiently and to put these products on the Internet, making online fabric shopping truly possible [1].
Traditional CAD systems use rigid models, which can accurately simulate the motion of mechanical parts; simulation techniques for rigid objects are already very mature. But such rigid models are not suitable for fabric. As a flexible body, fabric deforms substantially under external forces, and the deformation differs with material, manufacturing method, and even the cutting and sewing of the garment.
Scientists and researchers have carried out extensive experiments to obtain accurate physical parameters for fabric deformation under external forces, and have tried to develop models to describe these deformations. Owing to the intrinsic properties of fabric, however, these models have not been widely adopted. The main difficulties are:
1. The physical and mechanical behavior of fabric is very complex. The diversity of materials, the complexity of structure, the irregularity of shape, and the uneven density distribution of customized products all make dynamic fabric simulation very difficult.
2. Although extensive experiments have produced accurate fabric deformation parameters, forces in different directions and of different kinds involve different deformation rules. Summarizing and classifying these parameters, and covering all the deformation rules with a single simplified model, is a hard problem. Existing models are mostly quite complex; even on a high-performance workstation, computing a single frame can take from several seconds up to days, which clearly cannot meet real-time rendering requirements.
3. Collision detection is a classic problem in animation.
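The paper's semi-rigid spline formulation is not reproduced here; as a rough illustration of the general idea of handling warp- and weft-direction constraints separately, the sketch below uses a plain mass-spring cloth grid in which the springs along each thread direction get their own stiffness. All names and parameters are ours, not the authors'.

```python
# Hedged sketch (our illustration, not the paper's method): structural springs
# along the warp (i) and weft (j) directions of a particle grid, each with its
# own Hooke stiffness, as one simple way to decouple the two constraint sets.

def spring_forces(positions, rest, k_warp, k_weft):
    """positions: dict (i, j) -> (x, y, z). Returns per-particle spring forces
    from warp-direction and weft-direction neighbors."""
    forces = {p: [0.0, 0.0, 0.0] for p in positions}
    for (i, j), p in positions.items():
        for di, dj, k in ((1, 0, k_warp), (0, 1, k_weft)):
            q = positions.get((i + di, j + dj))
            if q is None:
                continue
            d = [qc - pc for pc, qc in zip(p, q)]
            length = sum(c * c for c in d) ** 0.5
            if length == 0.0:
                continue
            # Hooke's law along the thread; equal and opposite on both ends.
            f = [k * (length - rest) * c / length for c in d]
            for axis in range(3):
                forces[(i, j)][axis] += f[axis]
                forces[(i + di, j + dj)][axis] -= f[axis]
    return forces

# Two particles one warp step apart, stretched to length 2 with rest length 1:
pos = {(0, 0): (0.0, 0.0, 0.0), (1, 0): (2.0, 0.0, 0.0)}
f = spring_forces(pos, rest=1.0, k_warp=10.0, k_weft=1.0)
print(f[(0, 0)])  # [10.0, 0.0, 0.0]  (pulled toward the other particle)
```

Integrating these forces with any standard scheme (e.g. explicit Euler) and different k_warp/k_weft values gives visibly anisotropic stretching behavior.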
<Literature translation: source text> One Maya. More Value. Autodesk Maya 2010 software is the first release to unify the Maya Complete 2009 and Maya Unlimited 2009 feature sets, advanced matchmoving capabilities, and powerful compositing into a single offering with exceptional value. "Producers have become more savvy with respect to computer-generated imagery; they expect more work with additional complexity in less time than ever before. Maya 2010 gives us the total package to efficiently handle any challenge they can throw at us, whether it's heavy in tracking, modeling, animating, rendering, or compositing." (Paal Anand, Digital Post Supervisor, Bling Imaging; names of people and companies are not to be translated.)

For those looking to create the kind of compelling digital imagery found in Academy Award-winning films and top-selling games, Autodesk Maya 2010 delivers extensive 3D modeling, animation, and rendering toolsets; innovative simulation and compositing technologies; and a flexible software development kit (SDK) and scripting capabilities, making it easier, and more affordable, to create stylistic designs, believable animated characters, and lifelike visual effects.

Unbeatable Value. Autodesk Maya 2010 offers an end-to-end computer graphics (CG) workflow based on the award-winning Maya Unlimited 2009 toolset, with its advanced simulation tools for cloth, hair, fur, fluids, and particles. To supplement your creative workflow, we've also added the Maya Composite high dynamic range compositing system, a matchmoving camera tracking system, five additional mental ray for Maya batch* rendering nodes, and the Autodesk Backburner** network render queue manager.
Proven Solution. Maya has been a favorite among companies producing top film, games, and television content throughout the world for the last decade. Meanwhile, award-winning commercial spot facilities like The Mill and Psyop count Maya among their toolsets, as do top broadcasters such as NBC, Seven Network, and Turner.

Features. Maya has been at the cutting edge of feature development for over 10 years, and Maya 2010 is no exception. The software is packed with tried and tested features that help speed your project from initial concept to finished renderings: polygon and NURBS modeling, UV mapping and texturing, animation and rigging, dynamic simulation tools, tools for generating plants and other natural detail, in addition to advanced compositing capabilities, and a choice of four built-in renderers, including mental ray.

Productivity. Increased competition for projects and tighter deadlines mean that many jobs require even more high-quality work in less time. Maya 2010 helps maximize productivity with optimized workflows for everyday tasks; opportunities for collaborative, parallel workflows and reuse of assets; and automation of repetitive tasks through scripting.

Performance. Through a combination of multi-threading, algorithmic tuning, sophisticated memory management, and tools for segmenting scenes, Maya 2010 is engineered to elegantly handle today's increasingly complex data sets without restricting the creative process.

Interoperability. Whether you are painting textures in Adobe Photoshop software, compositing shots in Maya Composite or Autodesk Flame software, or bringing in cleaned motion capture data from Autodesk MotionBuilder character animation software, Maya 2010 helps to minimize errors and reduce iterations. And, support for the Autodesk FBX data interchange technology enables you to reuse assets created outside of Maya in your Maya scenes.
Maya also offers an SDK to assist with pipeline integration.

Extensibility. Maya is an out-of-the-box solution, but for companies that want to integrate it with their pipelines, or to add new features, Maya offers avenues for customization. Built from the ground up with its own embedded scripting language, Maya Embedded Language (MEL), Maya 2010 also offers Python scripting and an extensive, well-documented C++ application programming interface (API).

Platform of Choice. Whether you use a Windows, Mac, or Linux operating system, Maya 2010 runs on your platform of choice. And it's ready to handle the large amounts of memory that today's large scenes require, with 64-bit executables for both Windows and Linux operating systems.

Advanced Simulation Tools. Every license of Maya 2010 now includes the innovative Maya Nucleus unified simulation framework and the first two fully integrated Nucleus modules, Maya nCloth and Maya nParticles, as well as Maya Fluid Effects, Maya Hair, and Maya Fur. These widely used, production-proven toolsets for simulating cloth, fluids, hair, and fur enable you to more efficiently create the types of sophisticated effects audiences crave, without additional software investment.

High-Performance Compositing. Maya Composite brings high-performance, high dynamic range (HDR) compositing to Maya 2010. The comprehensive Maya Composite toolset gives you keying, tracking, color correction, rotoscoping, paint, and warping tools; advanced filters (including motion blur and depth of field); a full 3D compositing environment; and support for stereoscopic production. Available on the same choice of platforms as Maya, this node-based compositor provides you with a high-efficiency, collaborative compositing environment.

Professional Camera Tracking. A crucial tool for any leading visual effects production work, Autodesk MatchMover software makes high-quality 3D camera tracking accessible within Maya.
Using this toolset, you can extract accurate 3D camera and motion data from video and film sequences so you can insert your Maya elements seamlessly into the footage. MatchMover combines automatic tracking capabilities with the precision manual controls professionals demand.

Augmented Rendering Power. With five additional mental ray for Maya batch rendering nodes, you can now use a network of computers to render sequences faster. The Backburner network render queue manager is also included with Maya 2010, to help you manage the process; or simply integrate the additional mental ray for Maya nodes with your existing render management software.

<Literature translation: translated text> One Maya, more value. Autodesk Maya 2010 is the first release to unify the Maya Complete 2009 and Maya Unlimited 2009 feature sets, advanced matchmoving capability, and powerful compositing into a single offering of exceptional value.
Fujian Normal University, Zhihuishu online course test questions and answers for "Visual Communication Design": Computer-Aided Design (MAYA) (image size may be adjusted freely)

Volume 1
I. Comprehensive assessment (15 questions)

1. MAYA is divided into six modules: Animation, Modeling, Dynamics, Rendering, and ( ).
A. Cloth  B. Hypershade  C. Live  D. Create

2. Which description of dynamics playback is incorrect? ( )
A. Maya computes dynamics frame by frame
B. In dynamics, frame-by-frame playback is not strictly required to obtain correct solver results
C. An object's state at each frame is computed from the result of the previous frame
D. In dynamics, frame-by-frame playback is not strictly required

3. Different dynamic fields make particles move in different ways. Which description of fields is correct? ( )
A. All dynamic fields in Maya can set static particles in motion
B. Gravity is the only dynamic field that can make particles move downward
C. To add a field to particles, you must first select the particles and then execute the field command from the Fields menu; if the field is created first, it cannot affect the particles
D. Gravity and Newton both simulate attractive forces, so in Maya Newton can be used in place of Gravity

4. Which attribute controls the transparency of a material? ( )
A. Color  B. Ambient Color  C. Transparency  D. Bump Mapping

5. In Maya, the "New Scene" command in the File menu means ( ).
A. Create a new project  B. Set a project  C. Create a new scene  D. Export all

6. In Maya, a NURBS model can be converted to a polygon model. ( )
A. True  B. False

7. Which of the following are MAYA editors? ( )
A. General Editors  B. Relationship Editors  C. Attribute Editor  D. Hypershade

8. To set keyframes only on all of an object's translate attributes, the simplest and most convenient operation is ( ).
A. Select the object and press the W key
B. Select the object and press the S key on all translate attributes
C. Select the object, right-click on all translate attributes, and choose Key All
D. Select the object, right-click on all translate attributes, and choose Mute All

9. Hair cannot be rendered directly in Maya's mental ray.
Computer-Aided Geometric Modeling Techniques
Lecturer: Professor Qin Kaihuai (秦开怀), doctoral supervisor, qkh-dcs@
Department of Computer Science and Technology, Tsinghua University
Time: September 2007 to January 2008

Textbooks/References:
- J. Hoschek & D. Lasser, Fundamentals of Computer Aided Geometric Design, A K Peters, Ltd., Massachusetts, 1993.
- David F. Rogers, An Introduction to NURBS, Morgan Kaufmann, 2001.
- L. Piegl & W. Tiller, The NURBS Book (2nd Edition), Springer-Verlag, Berlin Heidelberg New York, 1997.
- Carl de Boor, A Practical Guide to Splines, Springer-Verlag, New York, 1978.
- M. E. Mortenson, Geometric Modeling, John Wiley & Sons, Inc., 1985.
- G. Farin, Curves and Surfaces for Computer Aided Geometric Design (5th Edition), Elsevier Inc., 2002. (Chinese translation by Li Shuangxi: CAGD 曲线曲面, Science Press, 2006.)
- E. J. Stollnitz, T. DeRose & D. H. Salesin, Wavelets for Computer Graphics: Theory & Applications, Morgan Kaufmann Publishers, Inc., San Francisco, 1996.
- Denis Zorin & Peter Schroder, Subdivision for Modeling and Animation, SIGGRAPH 2000 Course Notes #23, 2000.
- R. Barzel, Physically-Based Modeling for Computer Graphics: A Structured Approach, Academic Press, Inc., San Diego, 1992.
- D. N. Metaxas, Physics-Based Deformable Models: Applications to Computer Vision, Graphics & Medical Imaging, Kluwer Academic Publishers, Massachusetts, 1997.
- Donald Hearn & M. Pauline Baker, Computer Graphics with OpenGL (Third Edition), Pearson Education, 2004. (Chinese translation by Cai Shijie et al.: 《计算机图形学(第三版)》, Publishing House of Electronics Industry, 2005-06.)
- J. D. Foley, et al., Computer Graphics: Principles & Practice (2nd Edition in C), Addison-Wesley, Reading, MA, 1996.

Grading Policy:
- Three assignments: 30%
- Discussions/learning in classroom: 5%
- One project substituting for the final examination: 65%

Remarks:
- The three assignments are to be completed individually, but discussions among fellow students are allowed.
- The project substitutes for the final examination. Two students can work together as a group.
- Absolutely no sharing or copying of any code for either the assignments or the project! Offenders will be given a failing grade and the case will be reported to the department.
- You are welcome to turn off your mobile phone before attending lectures.

This course concentrates on seven main issues:
- NURBS curves and surfaces (including Bezier and B-spline curves and surfaces)
- Triangular surfaces
- Gordon-Coons surfaces
- Subdivision surfaces of arbitrary topology
- The 2nd-generation wavelets for multi-resolution modeling
- Solid modeling
- New technology for geometric modeling

Contents of This Course:
1. Introduction
2. ∆ Mathematical Basics: affine maps; divided differences; function spaces; geometric basics of curves and surfaces
3. ∆ Interpolatory Polynomial Splines: Hermite interpolation; quadric polynomial spline curves; cubic polynomial spline curves; solving a linear system of equations with a tridiagonal coefficient matrix; cubic parametric spline curves
4. * Bezier Curves and Surfaces: Bezier curves defined by edge vectors; Bernstein-Bezier curves and their properties; the de Casteljau algorithm; discrete generation of Bezier curves; degree elevation and degree reduction of Bezier curves; Bezier spline curves; Bezier interpolation curves; matrix formula of Bezier curves; rational Bezier curves; product and inner product of Bezier curves; Bezier surfaces
5. * B-spline Curves and Surfaces: B-spline basis functions and their properties; B-spline curves; open curves and knot vectors; uniform B-spline curves; endpoint-interpolating B-spline curves; closed B-spline curves; Chaikin algorithm; de Boor algorithm; knot insertion (Boehm algorithm, Oslo algorithm, general knot insertion); degree elevation (Marsden identity and recursive degree elevation, Prautzsch algorithm, arbitrarily high degree elevation); degree reduction of B-spline curves; B-spline surfaces; interpolating B-spline curves and surfaces; matrix formulas of uniform and non-uniform B-splines; inner products of B-spline basis functions and curves; generalized Marsden identity; B-spline curve product
6. * NURBS Curves and Surfaces: NURBS curves; representing conics using NURBS; parameterization of curves; NURBS surfaces; representing quadrics using NURBS surfaces; interpolating NURBS curves and surfaces
7. Blossoming Principle: the de Casteljau algorithm from a blossoming point of view; knot insertion from a blossoming point of view; generating de Boor points based on the blossoming principle; degree raising of B-spline curves by blossoming
8. * Triangular Surfaces: barycentric coordinates; triangular Bezier surfaces; continuity conditions for triangular Bezier patches; rational triangular surfaces
9. * Gordon-Coons Surfaces: Coons surfaces; Gordon-Coons surfaces on rectangles; Gordon-Coons surfaces on triangles
10. * Subdivision Surfaces of Arbitrary Topology: Catmull-Clark surfaces; Doo-Sabin surfaces; continuity of uniform subdivision surfaces; non-uniform subdivision surfaces; convergence and continuity of non-uniform subdivision surfaces
11. * The 2nd-Generation Wavelets for Multi-resolution Modeling: B-spline wavelets for multi-resolution modeling; endpoint-interpolating B-spline wavelets; arbitrary non-uniform B-spline wavelets; B-spline wavelets with constraints; subdivision-based surface wavelets (Loop subdivision wavelets, Catmull-Clark subdivision wavelets, √3-subdivision-based bi-orthogonal wavelets)
12. ∆ Scattered Data Interpolation
13. * Intersections of Curves and Surfaces
14. * Solid Modeling
15. Parameterization Modeling for Shape Design and Feature-based Modeling
16. * New Technology for Geometric Modeling: hierarchical B-splines; physics-based modeling; modeling fractalized scenes (mountains, flowers, etc.); particle systems for modeling fire, clouds, water, forests, etc.

1. Introduction
Some Applications of CAGD:
- Representation of large data sets
- Visualizing products
- Automatically producing sectional drawings
- Modeling surfaces arising in construction of cars, ships & airplanes
- Designing pipe systems, e.g. in chemical plants
- Drawing marine charts and city and relief maps in cartography
- Production and quality control, e.g. in the sewing machine, textile and shoe industries
- Planning and controlling surgery
- Creating images in advertising, television and film industries
- Constructing virtual environments
- Describing robot paths and controlling their movements
- Controlling milling machines used in manufacturing

Curve modeling with constrained B-spline wavelets: multi-resolution curve modeling that preserves feature points; seamless piecewise multi-resolution representation of curves; subdivision surfaces; spline surface wavelets with constraints. The left figure shows the result of piecewise multi-resolution representation using classical B-spline surface wavelets; the right figure shows the result using constrained B-spline surface wavelets, where the constraints are imposed along the joint lines.
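Several topics in the syllabus above are easy to illustrate concretely; for example, the de Casteljau algorithm listed under item 4 evaluates a Bezier curve by repeated linear interpolation of its control points. A minimal sketch (ours, not from the course materials):

```python
# de Casteljau evaluation of a Bezier curve at parameter t: repeatedly
# interpolate between consecutive control points until one point remains.

def de_casteljau(points, t):
    """Evaluate the Bezier curve defined by `points` at t in [0, 1]."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # One round of linear interpolation between consecutive points.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A quadratic Bezier with control points (0,0), (1,2), (2,0):
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))  # (1.0, 1.0)
```

The same intermediate points that the algorithm computes also give the control polygons of the two curve halves, which is why the algorithm doubles as a subdivision scheme.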
Abstract

With the rapid development of computer multimedia and network technology, people demand ever more natural human-computer interfaces. 3D face synthesis, an important component of virtual human synthesis, has made great progress over the past 30 years through the sustained efforts of animation researchers. Realistic 3D face synthesis has a very wide range of applications; its results can be used for virtual presenters, virtual conferencing, computer-assisted instruction, medical research, film production, games and entertainment, and more, giving it major theoretical significance and practical value.
This thesis investigates key problems of face synthesis, including 3D face modeling, MPEG-4-based facial animation, facial animation retargeting, fine-scale texture animation, and facial animation driving, with the following results:

1) 3D face modeling. Constructing a 3D face model conveniently, quickly, and accurately has long been a difficult problem for computer animators. This thesis presents a method for constructing a 3D face model from a 2D face photograph. The input is simple: only a single frontal face photo is required. Modeling is fast, roughly 15 times faster than comparable international 3D reconstruction methods, and the whole reconstruction process is fully automatic. The technique can also be applied to synthesis-based face recognition: by enlarging the training data, it substantially improves recognition accuracy without changing the recognition algorithm.

2) The MPEG-4 3D facial animation framework is general, easily extensible, and compact. This thesis proposes a method for constructing the FAT (facial animation table) of a specific face model, which can drive our standard model as well as any face model with the same topology as the standard model.

3) Animation retargeting has been a hot international research topic in recent years. Building on the FAT of the standard model, this thesis proposes a method for constructing the FAT of a 3D face model with arbitrary topology, enabling reuse of facial animation data and solving the problem of driving arbitrary face models. The method only requires marking a number of feature points on the model; it is highly automatic, fast, and general.

4) Traditional facial animation uses a single fixed texture, so wrinkles cannot be represented in the texture. This thesis introduces the concept of the PERI (partial expression ratio image) and builds a PERI library, using the ability of PERIs to capture fine texture features to represent wrinkles in the texture.
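The general ratio-image idea underlying PERI can be sketched as follows. This is a hypothetical toy reimplementation, not the thesis code: the ratio image is the per-pixel ratio of an expressive texture to the neutral texture, and wrinkle detail can be transferred by multiplying another neutral texture by that ratio.

```python
# Toy sketch of the expression-ratio-image idea (our illustration): compute
# R = expressive / neutral per pixel, then transfer the captured wrinkle
# detail onto a different neutral texture by multiplication.
import numpy as np

def expression_ratio_image(expressive, neutral, eps=1e-6):
    """Per-pixel ratio R = expressive / neutral for aligned float textures."""
    return expressive / np.maximum(neutral, eps)

def apply_ratio_image(neutral_target, ratio):
    """Transfer the expression wrinkles onto another neutral texture."""
    return np.clip(neutral_target * ratio, 0.0, 1.0)

neutral = np.full((4, 4), 0.5)
expressive = neutral.copy()
expressive[1, 1] = 0.25           # a darkened "wrinkle" pixel
R = expression_ratio_image(expressive, neutral)
out = apply_ratio_image(np.full((4, 4), 0.8), R)
print(out[1, 1])                  # the wrinkle reappears: 0.8 * 0.5 = 0.4
```

A "partial" ratio image in the PERI sense would restrict this computation to a masked facial region rather than the whole texture.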
3D Art English Vocabulary

The realm of 3D art encompasses a vast and intricate vocabulary, each term serving as a building block in the creation of captivating digital worlds. As the field of 3D art continues to evolve, mastering this lexicon becomes paramount for both aspiring and seasoned artists alike. In this essay, we will delve into the key terminologies that shape the language of 3D art, exploring their significance and applications within the creative process.

At the foundation of 3D art lies the concept of modeling, the process of creating three-dimensional digital representations of objects, characters, or environments. The primary building blocks of modeling are vertices, points in 3D space that define the shape of a mesh. These vertices are connected by edges, forming the wireframe structure that outlines the form. Faces, the polygonal surfaces between these edges, give the model its solid appearance and texture.

The manipulation of these vertices, edges, and faces is the domain of mesh editing, where artists sculpt and refine the digital form. Techniques such as extrusion, which extends faces to create new volume, and subdivision, which increases the resolution of a mesh, allow for the intricate shaping of complex forms. Smoothing operations, like subdivision surface modeling, create organic, flowing forms, while Boolean operations, such as union and difference, enable the combination and subtraction of shapes.

Closely tied to the modeling process is the concept of UV mapping, the process of unwrapping a 3D model's surface onto a flat, two-dimensional texture. This mapping allows artists to apply detailed textures and patterns to the model, bringing it to life with color, depth, and visual interest. The UV coordinates, which correspond to specific points on the 3D mesh, serve as a roadmap for the texture artists to follow.

Once the model is created and textured, the next step is to imbue it with movement and animation.
This is where the principles of rigging and skinning come into play. Rigging involves the creation of a skeletal system within the 3D model, consisting of joints and bones that mimic the underlying structure of the subject. Skinning, on the other hand, is the process of binding the mesh to the rig, allowing the model to deform and move naturally as the rig is animated.

The art of animation itself encompasses a wide range of techniques and terminologies. Key frames, the specific points in time where the animator defines the position and movement of the model, form the foundation of animation. In-betweening, the process of generating the intermediate frames between key frames, creates the illusion of smooth, continuous motion. Pose-to-pose animation, where the artist focuses on defining key poses and allowing the software to generate the in-betweens, contrasts with straight-ahead animation, where the movement is created frame by frame.

Lighting, a crucial aspect of 3D art, also has its own specialized vocabulary. Ambient light, the overall illumination of a scene, sets the mood and atmosphere, while directional lights, such as the sun, cast shadows and create depth. Spot lights and point lights, with their focused beams and radial falloff, allow artists to highlight specific areas and create dramatic lighting effects. The concept of light mapping, the baking of lighting information into a texture, enables efficient and realistic lighting in real-time 3D applications.

Closely related to lighting is the realm of materials and shaders, which define the surface properties of 3D objects. Diffuse, the base color of a material, interacts with light to create the object's primary appearance. Specular highlights, the bright reflections on shiny surfaces, add depth and realism.
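The diffuse and specular terms just described can be sketched engine-agnostically. This is a toy Lambert-plus-Phong example with illustrative names and coefficients, not the shading model of any particular renderer:

```python
# Toy shading sketch: Lambert diffuse depends on the angle between the
# surface normal and the light direction; a Phong-style specular highlight
# depends on how closely the reflected light direction matches the viewer.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, to_light, to_view, diffuse=0.8, specular=0.5, shininess=32):
    n, l, v = normalize(normal), normalize(to_light), normalize(to_view)
    lambert = max(dot(n, l), 0.0)
    # Reflect the light direction about the normal: r = 2(n.l)n - l
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    spec = max(dot(r, v), 0.0) ** shininess if lambert > 0 else 0.0
    return diffuse * lambert + specular * spec

# Light and viewer directly above an upward-facing surface: full diffuse
# plus a maximal specular highlight.
print(shade((0, 0, 1), (0, 0, 1), (0, 0, 1)))  # 0.8 + 0.5 = 1.3
```

Raising the shininess exponent tightens the highlight, which is the knob the roughness/glossiness terms below conceptually control.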
Roughness and glossiness determine the smoothness or grittiness of a material, while normal maps and displacement maps add intricate surface details.

The final stage of the 3D art process is rendering, the act of generating the final image or animation from the 3D scene. Rendering engines, such as Unreal Engine and Unity, utilize various algorithms and techniques to translate the digital scene into a visually stunning output. Terms like ray tracing, which simulates the behavior of light, and global illumination, which accounts for the indirect lighting in a scene, are essential to understanding the rendering process.

Beyond the technical aspects of 3D art, the industry also has its own set of specialized roles and workflows. Concept artists, who create the initial visual ideas and designs, work in tandem with 3D modelers, who bring those concepts to life. Texture artists, responsible for creating the detailed surface patterns, collaborate with lighting artists, who fine-tune the illumination of the scene. Riggers and animators work together to bring characters and objects to life, while technical artists bridge the gap between the creative and the technical, ensuring the seamless integration of all the elements.

In conclusion, the vocabulary of 3D art is a rich and multifaceted language, encompassing a wide range of terms and concepts that are essential to the creation of captivating digital worlds. From the fundamental building blocks of modeling to the advanced techniques of lighting and rendering, each term serves as a tool in the artist's arsenal, enabling them to bring their visions to life with precision and artistry. As the field of 3D art continues to evolve, mastering this lexicon becomes increasingly important, allowing artists to communicate effectively, collaborate seamlessly, and push the boundaries of what is possible in the digital realm.
Geometric Modeling

Geometric modeling is a fundamental concept in the field of computer graphics and computer-aided design (CAD). It involves creating digital representations of objects and scenes using mathematical and computational techniques. Geometric modeling plays a crucial role in various industries, including architecture, engineering, manufacturing, and entertainment. This technology enables designers and engineers to visualize and analyze complex shapes and structures, facilitating the creation of innovative and efficient products and designs.

One of the key aspects of geometric modeling is the representation of 3D objects in a virtual environment. This process involves defining the shape, size, and position of the objects using mathematical equations and algorithms. There are various techniques for representing 3D objects, including boundary representation (B-rep), constructive solid geometry (CSG), and polygonal meshes. Each technique has its own advantages and limitations, and the choice of representation depends on the specific requirements of the application.

Geometric modeling also encompasses the manipulation and transformation of 3D objects. This includes operations such as translation, rotation, scaling, and deformation, which are essential for design and engineering applications. These operations are typically performed using geometric transformations, which involve applying mathematical formulas to the vertices and edges of the 3D objects. Geometric transformations are the foundation of many advanced modeling techniques, such as parametric modeling and procedural modeling, which enable designers to create complex and realistic shapes with ease.

In addition to representing and manipulating 3D objects, geometric modeling also involves the creation of surfaces and curves. This is essential for generating smooth and continuous shapes, such as those found in industrial design, automotive design, and animation.
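The translation, rotation, and scaling operations described above compose cleanly when expressed as homogeneous matrices. A minimal 2D sketch (illustrative only, not tied to any particular CAD package):

```python
# Geometric transformations as 3x3 homogeneous matrices acting on 2D points,
# so that translation, rotation, and scaling all compose by multiplication.
import math

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, p):
    x, y = p
    v = [x, y, 1.0]
    r = [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]
    return (r[0], r[1])

# Scale a vertex by 2, then translate it by (1, 1); note that the matrix
# applied first appears rightmost in the product.
m = matmul(translate(1, 1), scale(2, 2))
print(apply(m, (3.0, 4.0)))  # (7.0, 9.0)
```

The same pattern extends to 3D with 4x4 matrices, which is the form graphics APIs and CAD kernels typically use.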
There are various mathematical techniques for defining surfaces and curves, including B-splines, NURBS (non-uniform rational B-splines), and Bezier curves. These techniques provide a high degree of flexibility and precision, allowing designers to create organic and aesthetically pleasing shapes.

Furthermore, geometric modeling plays a crucial role in simulation and analysis applications. By creating accurate digital representations of physical objects and environments, engineers and scientists can simulate real-world scenarios and analyze the behavior of complex systems. This is particularly important in fields such as structural engineering, fluid dynamics, and virtual prototyping, where the ability to model and analyze the behavior of physical objects is essential for innovation and problem-solving.

Moreover, geometric modeling is also integral to the field of 3D printing and additive manufacturing. The ability to create precise and detailed digital models is essential for producing high-quality physical objects using 3D printing technology. Geometric modeling enables designers to create intricate and customized designs that can be manufactured with a high degree of accuracy and efficiency. This has revolutionized the manufacturing industry, allowing for the production of complex and lightweight components that were previously impossible to create using traditional manufacturing methods.

In conclusion, geometric modeling is a versatile and essential technology that has revolutionized the way we design, engineer, and manufacture products and structures. Its impact is evident across a wide range of industries, from architecture and engineering to entertainment and manufacturing. By enabling the creation of accurate digital representations of 3D objects and environments, geometric modeling has facilitated innovation, efficiency, and creativity in countless applications.
As technology continues to advance, the role of geometric modeling will only become more significant, driving further advancements in design, engineering, and manufacturing.
DDPM Principles (Part 1): An Analysis of the DDPM System

1. What is DDPM
DDPM (Diffusion Deformable Progressive Meshes) is a deformable-mesh representation method based on progressive mesh refinement. It is widely used in computer graphics, animation, and deformation modeling.
2. Basic Principles of the DDPM System

2.1 Progressive mesh refinement
The DDPM system represents deformable objects using progressive mesh refinement. The object is represented as a hierarchy in which each level contains more detailed shape information. By refining the mesh level by level, the DDPM system achieves higher levels of detail.
2.2 Deformation modeling
The DDPM system models object deformation by deforming the mesh between the different levels of the hierarchy. It can accurately capture an object's deformation and achieve efficient deformation animation.
2.3 Topology preservation
The DDPM system preserves the mesh topology during deformation. This means the deformed mesh retains the same connectivity as the original mesh, avoiding the discontinuities that deformation might otherwise introduce.
3. How the DDPM System Works

3.1 Mesh initialization
The DDPM system starts from an initial mesh consisting of a set of discrete vertices and faces. This initial mesh is usually a simplified, low-detail model.
3.2 Progressive refinement
The DDPM system adds detail by refining the mesh level by level. At each refinement level, the system selects a subset of mesh faces for refinement according to refinement criteria, and generates new vertices by interpolation. Each refinement level thus contains more detailed shape information than the previous one.
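One refinement step of the kind described, selecting faces and generating new vertices by interpolation, can be sketched as uniform 1-to-4 triangle subdivision with edge midpoints. This is a simplified illustration; the actual DDPM refinement criteria are not reproduced here:

```python
# Toy sketch of one refinement level: each triangle is split into four by
# inserting interpolated midpoint vertices on its edges, with midpoints
# shared between neighboring triangles via a cache.

def midpoint(vertices, a, b, cache):
    """Return the index of the interpolated vertex on edge (a, b)."""
    key = (min(a, b), max(a, b))
    if key not in cache:
        va, vb = vertices[a], vertices[b]
        vertices.append(tuple((x + y) / 2 for x, y in zip(va, vb)))
        cache[key] = len(vertices) - 1
    return cache[key]

def refine(vertices, faces):
    """Split every triangle into four, sharing midpoints between neighbors."""
    cache, out = {}, []
    for a, b, c in faces:
        ab = midpoint(vertices, a, b, cache)
        bc = midpoint(vertices, b, c, cache)
        ca = midpoint(vertices, c, a, cache)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
faces = refine(verts, [(0, 1, 2)])
print(len(verts), len(faces))  # 6 4  (6 vertices, 4 faces after one step)
```

A selective scheme would apply the same split only to faces flagged by the refinement criteria, keeping the rest of the mesh coarse.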
3.3 Deformation modeling
During deformation, the DDPM system deforms the mesh by changing vertex positions. The system displaces vertices according to the desired deformation, and achieves smooth deformation through the influence on neighboring vertices.
3.4 Topology preservation
The DDPM system avoids discontinuities during deformation by preserving the mesh topology. It adjusts the connectivity of adjacent faces according to the modified vertex positions, so that the deformed mesh keeps the same topology as the original mesh.
4. Advantages and Applications of DDPM

4.1 Advantages
- The DDPM system accurately captures an object's deformation and achieves high-quality deformation effects.
- The DDPM system uses progressive refinement, so the mesh can be refined on demand and models at different detail levels can be selected as needed.
Physically Based Animation and Rendering of Lightning
Theodore Kim, Ming C. Lin
Department of Computer Science, University of North Carolina at Chapel Hill
kim,lin@ /LIGHTNING

Abstract. We present a physically-based method for animating and rendering lightning and other electric arcs. For the simulation, we present the dielectric breakdown model, an elegant formulation of electrical pattern formation. We then extend the model to animate a sustained, 'dancing' electrical arc, by using a simplified Helmholtz equation for propagating electromagnetic waves. For rendering, we use a convolution kernel to produce results competitive with Monte Carlo ray tracing. Lastly, we present user parameters for manipulation of the simulation patterns.

1. Introduction
The forked tendrils of electrical discharge have a long history as a dramatic tool in the visual effects industry. From the genesis of the monster in the 1931 movie Frankenstein, to the lightning from the Emperor's fingers in Return of the Jedi, to the demolition of the Coliseum by lightning in last year's The Core, lightning is a ubiquitous effect in science fiction and fantasy films.

Despite the popularity of this effect, there has been relatively little research into physically-based modeling of this phenomenon. The existing research is largely empirical, essentially generating a random tree-like structure that qualitatively resembles lightning. The previous work is also limited to brief flashes of lightning, and provides no method for animating a dancing, sustained stream of electricity. However, modeling the fractal geometry of electrical discharge and similar patterns has attracted much attention in physics.
To our best knowledge, our algorithm is the first rigorous, physically-based modeling of lightning in computer graphics. We also believe our approach is accurate enough that its applications extend beyond visual effects to more physically demanding applications, such as commercial flight simulation.

Main Contributions: In this paper, we present a physically-based algorithm to simulate lightning, and propose a novel extension for animation of continuous electrical streams. The simulation results are then rendered using an efficient convolution technique. The resulting image quality rivals that of Monte Carlo ray tracing. Lastly, we present user parameters for intuitive manipulation of the simulation. Our approach offers the following:
- A physically-inspired approach based on the dielectric breakdown model for electrical discharge;
- A novel animation technique for sustained electrical streams that solves a simplified Helmholtz equation for propagating electromagnetic waves;
- A fast, accurate rendering method that uses a convolution kernel to describe light scattering in participating media;
- A parameterization that enables simple artistic control of the simulation.

Organization: The rest of the paper is organized as follows. A brief survey of related work is presented in Sec. 2. In Sec. 3, we briefly summarize the physics of lightning formation. We present the original dielectric breakdown model as well as our proposed extension in Sec. 4. An efficient rendering method is presented in Sec. 5. User parameters are presented in Sec. 6, followed by implementation details and discussion in Sec. 7. Finally, conclusions and possible directions for future work are given in Sec. 8.

2. Previous Work
Reed and Wyvill present a lightning model based on the empirical observation that most lightning branches deviate by an average of 16 degrees from parent branches [14]. A set of randomly rotated line segments are then generated with their angles normally distributed around 16 degrees.
In subsequent work, modifications are made to this random line segment model. Glassner [6] performs a second pass on the segments to add "tortuosity", and Kruszewski [9] replaces the normal distribution with a more easily controlled randomized binary tree. Notably, Sosorbaram et al. [16] use the dielectric breakdown model (DBM) to guide the growth of a random line segment tree with a local approximation of the potential field. But, their approach does not appear to implement full DBM, as it does not solve the full Laplace equation.

Electric discharges are neither solid, liquid, nor gas, but instead are the fourth phase of matter, plasma. An electric discharge is a light source with no resolvable surface, so traditional rendering techniques are not directly applicable. To address this problem, Reed and Wyvill [14] describe a ray tracing extension for both a lightning bolt and its surrounding glow. Alternatively, [16] proposes rendering 3D textures. Dobashi, Yamamoto, and Nishita [4] provide the most rigorous treatment of the problem by first presenting the associated volume rendering integral, and then presenting an efficient, approximate solution.

In electrical engineering, there are three popular models of electric discharge: gas dynamics [5], electromagnetics [1], and distributed circuits [2]. However, none of these are directly applicable to visual simulation, as they respectively approximate the electricity as a cylinder of plasma, a thin antenna, and two plates in a circuit.

3. The Physics of Electric Discharge
We classify the physics literature into two categories.
The first deals with the physical, experimentally observed properties of lightning and related electrical patterns. A good survey of this approach is given by Rakov and Uman [13]. The second is a more qualitative approach that characterizes the geometric, fractal properties of electric discharge. A good survey of this approach is given by Vicsek [17].

3.1. Physical Properties

Electrical discharge occurs when a large charge difference exists between two objects. For lightning, the case is usually that the bottom of a cloud has a strong negative charge and the ground possesses a relatively positive charge. Since electrons carry negative charge, the charge difference is equalized when electrons are transferred from the cloud to the ground in the form of lightning. This case is referred to as "downward negative lightning". While other types exist, downward negative lightning accounts for 90 percent of all cloud-to-ground lightning. For illustrative purposes, we will show here how to simulate this most common type of lightning. But it should be noted that we can handle the other types of lightning by trivially manipulating the charge configuration.

Lightning is actually composed of several bolts, or "strokes", in rapid succession. The first stroke is referred to as the stepped leader. The subsequent strokes, called dart leaders, tend to follow the general path of the previous leaders, and do not exhibit as much branching as the stepped leader. We note that the random line segment approach of previous work in computer graphics does not provide a clear method of simulating dart leaders. But such a method is crucial for simulating sustained electric arcs, which are essentially stepped leaders followed by a large number of dart leaders.

Lightning is initiated in clouds by an event known as the initial breakdown. During the initial breakdown, the conductivity in a small column of air jumps several orders of magnitude, effectively transforming the column from an insulator (or dielectric) into a conductor. Charge then flows into the newly conductive air. Another breakdown then occurs somewhere along the perimeter of the newly charged air. This chain of events repeats, forming a thin, tortuous path through the air, until the charge reaches the ground.

3.2. Geometric Properties

The physical processes that give rise to the breakdown are still not well understood. However, a great deal of progress has been made in characterizing the geometric shape that the breakdown ultimately produces. Electric discharge has been observed to have a fractal dimension of approximately 1.7 [11]. Many disparate natural phenomena share this same fractal dimension, including ice crystals, lichen, and fracture patterns. Collectively, all the patterns that share these fractal properties are known as Laplacian growth phenomena.

There are three techniques for simulating Laplacian growth: Diffusion Limited Aggregation [18], the Dielectric Breakdown Model [11], and Hastings-Levitov conformal mapping [8]. All three produce qualitatively similar results. We elect to use the Dielectric Breakdown Model here because it gives the closest correspondence to the physical system being simulated and allows the addition of natural, physically intuitive user controls.

4. The Dielectric Breakdown Model

The Dielectric Breakdown Model, or DBM, was first described by Niemeyer, Pietronero, and Wiesmann [11], and is also sometimes referred to as the $\eta$ model. We first present the model described in the original paper, and then propose a modification to simulate dart leaders and sustained electric arcs.

Figure 1. Different charge configurations for simulation. Grey: $\phi = 0$; Black: $\phi = 1$. (a) Circular configuration; (b) Lightning configuration.

In the original model, the Laplace equation $\nabla^2 \phi = 0$ is first solved over the grid, subject to the boundary conditions in Figure 1(a). A growth site is then selected from the cells adjacent to the existing structure with probability

$$p_i = \frac{\phi_i^{\eta}}{\sum_{j=1}^{n} \phi_j^{\eta}} \qquad (2)$$

where $i$ is a cell in the list of adjacent cells, and $n$ is the total number of cells in the list. The $\eta$ term is a user parameter that will be discussed in Section 6. Subsequent iterations proceed by solving the Laplace equation again over the 2D domain, and again selecting a growth site according to Eqn. 2. The iterations are repeated until the user obtains the desired results. The technique generalizes trivially to three dimensions by simply solving the 3D Laplace equation.

The classic configuration produces a radial discharge, as shown in Figure 2(a). In order to produce lightning-like patterns, we instead use the initial configuration shown in Figure 1(b). We start with a small amount of negative charge at the top of the 3D domain, representing an initial branch of lightning. The bottom edge of the domain represents the ground, and is thus set to positive charge. The remaining grid edges are again set to $\phi = 0$. The results of running the simulation on this initial configuration with different $\eta$ are shown in Figures 2(b)-2(d).

4.2. A Poisson Growth Model

Once we have formed an initial stepped leader, we would like to have a method for generating subsequent dart leaders that follow the same general path. Since the path changes slightly with each successive dart leader, a large number of dart leaders will produce the "dancing" effect present in a sustained electric arc.

We hypothesize that the reason a dart leader follows the same general path as a stepped leader is that residual positive charge exists along the old leader channel, attracting the new dart leader. In order to simulate this behavior, we need a method of introducing residual charge into the simulation.

While DBM can simulate many different kinds of natural phenomena, we observe that for the case of electricity, the Laplace equation can be viewed as a special case of the Helmholtz equation for propagating electromagnetic waves:

$$\nabla^2 \phi + \left(\frac{\omega}{c}\right)^2 \phi = -\frac{\rho}{\varepsilon} \qquad (3)$$

The Laplace equation is obtained when both the relativistic $(\omega/c)^2$ term and the charge density $\rho$ term are ignored. As lightning bolts have a linear velocity that already approaches the speed of light, the angular component should be negligible. So, if we continue to ignore the relativistic term but re-introduce the charge density term, the electromagnetic Poisson equation is obtained:

$$\nabla^2 \phi = -\frac{\rho}{\varepsilon} \qquad (4)$$

If we now solve this equation in place of the Laplace equation, we can produce the desired dart leader behavior. The value of $\rho$ is determined by a second grid of values in space that is initially set to zero. This essentially reduces Eqn. 4 to the Laplace equation for the initial iteration. After we generate our first bolt, we deposit charge along the leader channel by setting $\rho$ in the cells along the channel to a positive value. When generating subsequent bolts, the new $\rho$ values will automatically attract the new bolt to the old path. After each new bolt is generated, we clear the previous $\rho$ field and repopulate it with charges along the new leader channel. Fortunately, because the Poisson and Laplace equations are very similar, the only implementation overhead required for our modified model is a minor change to the residual calculation in the conjugate gradient solver. It is worth noting that a similar model has been proposed in the physics literature [12] which also accounts for inhomogeneous dielectric permittivities. Our model was developed independently. For efficient visual rendering, we choose to ignore inhomogeneity and treat air as a homogeneous medium.

5. Rendering

For the rendering of electricity, we borrow the method of Narasimhan and Nayar [10]. In that paper, analytical models are obtained that reduce the rendering of certain types of participating media to a 2D convolution. The results are competitive with expensive Monte Carlo techniques such as photon mapping, but run in seconds instead of hours. We will first summarize the pertinent formulae from [10], then describe how we use them to generate a convolution kernel, and finally show how we render electricity.

5.1. Atmospheric Point Spread Function

The convolution kernel produced by the method of [10] is called an Atmospheric Point Spread Function, hereon referred to as an APSF. The APSF is a series expansion of the Henyey-Greenstein phase function, a popular function for describing the scattering of light in participating media.
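The growth loop of Sec. 4 can be sketched in a few lines. The following is a minimal 2D illustration, not the paper's implementation: it uses Jacobi relaxation instead of a conjugate gradient solver, a tiny grid, and illustrative boundary values; `ETA` plays the role of the $\eta$ parameter in Eqn. 2.

```python
import random

random.seed(7)

N = 24          # grid resolution (tiny, for illustration only)
ETA = 2.0       # branching parameter eta from Eqn. 2

phi = [[0.0] * N for _ in range(N)]
channel = {(0, N // 2)}          # initial negative charge at the top

def solve_laplace(phi, channel, iters=200):
    """Jacobi relaxation of the Laplace equation with Dirichlet boundaries."""
    for _ in range(iters):
        nxt = [row[:] for row in phi]
        for i in range(1, N - 1):
            for j in range(1, N - 1):
                if (i, j) in channel:
                    nxt[i][j] = 0.0          # channel cells: phi = 0
                else:
                    nxt[i][j] = 0.25 * (phi[i-1][j] + phi[i+1][j]
                                        + phi[i][j-1] + phi[i][j+1])
        for j in range(N):
            nxt[N-1][j] = 1.0                # ground row: phi = 1
        phi = nxt
    return phi

def grow_once(phi, channel):
    """Select one growth site adjacent to the channel, per Eqn. 2."""
    candidates = set()
    for (i, j) in channel:
        for (di, dj) in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            c = (i + di, j + dj)
            if 0 <= c[0] < N and 0 <= c[1] < N and c not in channel:
                candidates.add(c)
    cand = sorted(candidates)
    weights = [max(phi[i][j], 0.0) ** ETA for (i, j) in cand]
    r, acc = random.random() * sum(weights), 0.0
    for c, w in zip(cand, weights):
        acc += w
        if acc >= r:
            return c
    return cand[-1]

for _ in range(10):                          # ten growth steps
    phi = solve_laplace(phi, channel)
    channel.add(grow_once(phi, channel))
```

Solving the Poisson variant of Eqn. 4 would only change the relaxation update by a source term; the selection step is unchanged.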
The basis functions used are Legendre polynomials, whose series form is shown in Eqn. 5. The polynomials themselves can be evaluated with the standard recurrence

$$P_m(\mu) = \frac{2m-1}{m}\,\mu\,P_{m-1}(\mu) - \frac{m-1}{m}\,P_{m-2}(\mu) \qquad (9)$$

Again a base case is necessary: $P_0(\mu) = 1$, $P_1(\mu) = \mu$. The variable $q$ is the scattering parameter from the Henyey-Greenstein phase function. Increasing $q$ from 0 to 1 increases the density of the medium, and can be thought of as transitioning the weather from clear skies to rain. The optical thickness, $T$, is equal to $\sigma R$, where $R$ is the radial distance from the viewer and $\sigma$ is the extinction coefficient of air. Finally, $\mu$ is the cosine of the radial direction from the source.

5.2. Generating a Convolution Kernel

The APSF is a three-dimensional function that describes how much light reaches any point in space around a point light source. If we can determine how a single point light spreads out on the image plane, we can then use this point spread function as a convolution kernel to render a light source of arbitrary shape.

Assume we want to generate a square convolution kernel of some resolution and physical size. We sample the APSF according to the geometry in Figure 3. In this figure, we assume a pinhole camera model with a focal length about the width of an eyeball. We also assume the lightning stroke is two kilometers away. We treat the kernel's physical size as a user parameter that allows control over the width of the "glow" around the stroke. In order to compute the value at each point on the kernel, we need to determine a value of $\mu$ at each sample.
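The Legendre machinery behind the APSF can be checked numerically. The sketch below assumes only the classical identity $(1-q^2)/(1+q^2-2q\mu)^{3/2} = \sum_m (2m+1)\,q^m P_m(\mu)$ for the Henyey-Greenstein kernel, not the paper's exact recurrences:

```python
def legendre(m, x):
    """Legendre polynomial P_m(x) via the Bonnet recurrence (Eqn. 9)."""
    p_prev, p = 1.0, x                  # base cases P_0 and P_1
    if m == 0:
        return p_prev
    for k in range(2, m + 1):
        p_prev, p = p, ((2*k - 1) * x * p - (k - 1) * p_prev) / k
    return p

def hg_series(q, mu, terms):
    """Truncated Legendre series of the Henyey-Greenstein kernel."""
    return sum((2*m + 1) * q**m * legendre(m, mu) for m in range(terms))

def hg_closed(q, mu):
    """Closed-form Henyey-Greenstein kernel (up to a 1/(4 pi) factor)."""
    return (1 - q*q) / (1 + q*q - 2*q*mu) ** 1.5

# For q = 0.9 (dense medium) the truncated series matches the closed form:
approx = hg_series(0.9, 0.3, 200)
exact = hg_closed(0.9, 0.3)
```

The number of terms needed grows as $q \to 1$, since the series coefficients decay like $q^m$; this is why dense media require longer expansions.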
If we assume the point light source projects onto the center of the kernel, the value of $\mu$ at each kernel sample follows by trigonometry (Eqns. 10-12).

Locating the secondary and side channels is more involved. Every node adjacent to the main channel that is not on the main channel forms the root of a new tree. Within each such tree, the charge selects a preferred path that becomes the bright secondary channel. There is a poverty of theories on how this path is selected; perhaps the path that had the largest potential differences during the breakdown process is selected. For aesthetic effect, we set the path with the greatest number of nodes as the secondary channel. Off of this longest secondary channel, we also add other "long" paths according to a user-defined cutoff. This technique maximizes the length of the dramatic, snaking tendrils that surround the central channel. All the remaining edges are now considered to be side channels.

We must now assign a wattage to each edge. While there exists some data on the wattage of the main channel (bounds in watts per meter are given in [13]), we have been unable to find data on the wattage of secondary or side channels. We have attempted to estimate the wattages by deconvolving photographs of lightning, but this method requires a high dynamic range image of lightning that can resolve the bleached portion of the stroke, as well as the APSF values corresponding to the scene. We instead used heuristic values that brought us into close qualitative agreement with photographs.

We rendered the line segments and convolved them with the APSF settings given in Table 1. The resulting image was then composited into a raytraced rendering of the remaining scene objects. We do not set the main channel to the wattage given by [13] because, in the absence of tone mapping, this step would bleach the entire scene. The application of tone mapping to lightning rendering is discussed in our future work.

Table 1. APSF settings ($n$, $q$, $T$) used for the renderings in Figures 4, 5, and 7.

6. User Controls

Our modified DBM permits user control through four parameters: an $\eta$ variable to control the "branchiness" of the stream, a charge density field $\rho$ to control the path of the stream, a boundary condition to repel the stream, and an overall charge configuration to control where the stroke begins and ends.

The effect of the $\eta$ variable in Eqn. 2 can be seen in Figures 2(b)-2(d). At $\eta = 1$, dense branching is observed. As $\eta$ increases, the density of the branching decreases. Hastings observes that at $\eta = 4$, the stream transitions into a non-fractal, one-dimensional curve [7]. So, the domain of the parameter is effectively the range $1 \le \eta \le 4$. A physical interpretation of $\eta$ is not entirely clear; it can perhaps be viewed as the amount of resistance that the air offers to the process of dielectric breakdown.

As $\rho$ is a 2D field representing the image plane, the user can "paint" into it any desired charge distribution. The lightning stroke will then be attracted to this painted path as described in Section 4.2.

In addition to attracting the electric arc, the user may want to repel the arc from certain regions. For instance, there may be an obstacle in the scene that the user does not want the arc to intersect. This effect can be achieved by setting the interior of the obstacle to $\phi = 0$. This sets the charge of the object to the same charge as the arc, causing the obstacle to repel the arc. However, we must then be careful in our implementation not to add grid cells adjacent to the obstacle to the list of candidate growth sites in Eqn. 2.

Finally, we have only shown two charge configurations: the circle in Figure 1(a) and the lightning configuration in Figure 1(b). However, arbitrary charge configurations also produce electric arcs. The arc can begin from any arbitrarily shaped negative region and terminate at a positive object. In this way, it is possible to construct an arc between any two objects in an arbitrary scene.

7. Implementation and Results

We have implemented our algorithm in C++. We ran simulations for several scenes on a 2.66 GHz Xeon processor.
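The rendering pass of Sec. 5.2 amounts to rasterizing the channel at its assigned wattage and convolving the result with the kernel. The following is a minimal sketch; the normalized Gaussian standing in for the sampled APSF kernel, the grid sizes, and the unit wattage are all illustrative assumptions.

```python
from math import exp

def gaussian_kernel(n, sigma):
    """n x n stand-in glow kernel (the paper samples the APSF instead)."""
    c = n // 2
    k = [[exp(-((i - c)**2 + (j - c)**2) / (2 * sigma**2)) for j in range(n)]
         for i in range(n)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def convolve(img, kernel):
    """Direct 2D convolution with zero padding outside the image."""
    h, w, n = len(img), len(img[0]), len(kernel)
    c = n // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for a in range(n):
                for b in range(n):
                    y, x = i + a - c, j + b - c
                    if 0 <= y < h and 0 <= x < w:
                        acc += img[y][x] * kernel[a][b]
            out[i][j] = acc
    return out

# Rasterize a vertical main channel at unit wattage, then add the glow:
H = W = 33
img = [[0.0] * W for _ in range(H)]
for i in range(H):
    img[i][W // 2] = 1.0          # channel pixels carry the wattage

glow = convolve(img, gaussian_kernel(9, 2.0))
```

In practice the convolved image is then composited over the ray-traced scene, as described above.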
Unless otherwise noted, all simulations were performed on the same grid resolution with the same $\eta$. In one scene, we guide the bolt to the ground while avoiding the two beams in the center. In Figure 5, we demonstrate how the user can attract the bolt to an arbitrary object. The magenta electrode in the center is set to a negative charge, and the blue ball is set to a positive charge. As the blue ball moves, the electric arc follows. In Figure 7, we animate a dancing electric arc between two electrodes. In Figure 6, we validate our results by comparing our renderings with a photograph. The scene was simulated on a separate grid.

8. Conclusion and Future Work

We have presented a physically based algorithm for the simulation, animation, and rendering of sustained electric arcs. We believe that our approach is the most rigorous, physically consistent method available to date. However, there are several areas for refinement.

Primarily, the simulation can be very slow. For large 2D and 3D grids, the computation time can take hours. But it is unclear if other Laplacian growth methods, such as DLA or Hastings-Levitov conformal mapping, can give superior performance while preserving the same level of control.

While our rendering method is physically consistent, it would be more realistic to use some sort of tone mapping operator to bring the luminance values back into the range of the display device. No operator was used here because we were unsure which would be appropriate. In the tone mapping literature, a "bright" object is usually daylight or a lightbulb, so it is unclear if some of these methods would break down in the presence of luminance values many orders of magnitude brighter.

While the use of the convolution kernel generates impressive results, there are still some unresolved issues. It assumes the scattering medium is homogeneous, so it does not explicitly handle the effects of either internal obstacles or clouds. A scene requiring a volume caustic still needs a Monte Carlo renderer. The approach described in [4] appears to be the best solution for a scene containing clouds.
While an analytical solution may also be possible for these cases, one has not yet been found.

Finally, we have only presented one type of Laplacian growth: electric discharge. Laplacian growth encompasses many disparate phenomena, including ice formation, material fracture, lichen growth, tree growth, liquid surface tension, vasculature patterns, river formation, and even urban sprawl. Modeling of Laplacian growth is well worth exploring for visual simulation of natural phenomena.

Acknowledgements

The authors would like to thank Srinivasa Narasimhan for his help with the APSF, and the anonymous reviewers for their help in improving this manuscript. The photo in Figure 6 is © Warren Faidley and is used with permission. This work was supported in part by the Army Research Office, Intel Corporation, the National Science Foundation, and the Office of Naval Research.

References

[1] Y. Baba and M. Ishii. Numerical electromagnetic field analysis of lightning current in tall structures. IEEE Trans. Pow. Del., 16:324-328, 2001.
[2] C. Baum and L. Baker. Analytic return-stroke transmission-line model. Electromagnetics, 7:205-228, 1987.
[3] J. Demmel. Applied Numerical Linear Algebra. SIAM, 1997.
[4] Y. Dobashi, T. Yamamoto, and T. Nishita. Efficient rendering of lightning taking into account scattering effects due to clouds and atmospheric particles. Proc. of Pacific Graphics, 2001.
[5] E. Dubovoy, M. Mikhailov, A. Ogonkov, and V. Pryazhinsky. Measurement and numerical modeling of radio sounding reflection from a lightning channel. J. Geophys. Res., 100:1497-1502, 1995.
[6] A. Glassner. The digital ceraunoscope: Synthetic thunder and lightning. Technical Report MSR-TR-99-17, Microsoft Research, 1999.
[7] M. Hastings. Fractal to nonfractal phase transition in the dielectric breakdown model. Physical Review Letters, 87(17), 2001.
[8] M. Hastings and L. Levitov. Laplacian growth as one-dimensional turbulence. Physica D, 116:244-252, 1998.
[9] P. Kruszewski. A probabilistic technique for the synthetic imagery of lightning. Computers and Graphics, 1999.
[10] S. Narasimhan and S. Nayar. Shedding light on the weather. Proceedings of IEEE CVPR, 2003.
[11] L. Niemeyer, L. Pietronero, and H. J. Wiesmann. Fractal dimension of dielectric breakdown. Physical Review Letters, 52:1033-1036, 1984.
[12] M. Noskov, V. Kukhta, and V. Lopatin. Simulation of the electrical discharge development in inhomogeneous insulators. Journal of Physics D, 28:1187-1194, 1995.
[13] V. Rakov and M. Uman. Lightning: Physics and Effects. Cambridge University Press, 2003.
[14] T. Reed and B. Wyvill. Visual simulation of lightning. Proc. of SIGGRAPH, 1994.
[15] J. R. Shewchuk. An introduction to the conjugate gradient method without the agonizing pain. Technical report, Carnegie Mellon University, 1994.
[16] B. Sosorbaram, T. Fujimoto, K. Muraoka, and N. Chiba. Visual simulation of lightning taking into account cloud growth. Computer Graphics International, 2001.
[17] T. Vicsek. Fractal Growth Phenomena. World Scientific, 1992.
[18] T. Witten and L. Sander. Diffusion-limited aggregation, a kinetic critical phenomenon. Physical Review Letters, 47(19):1400-1403, 1981.

Figure 6. Validation. Left: Photograph. Right: Rendering.
SIGGRAPH Conference Papers by Year (2)

SIGGRAPH 2002 papers on the web. Page maintained by Tim Rowley. If you have additions or changes, send an e-mail.

Note that when possible I link to the page containing the link to the actual PDF or PS of the preprint. I prefer this as it gives some context to the paper and avoids possible copyright problems with direct linking. Thus you may need to search on the page to find the actual document.

ACM Digital Library: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques

Images and Video
- Video Matting of Complex Scenes. Yung-Yu Chuang, Aseem Agarwala, Brian Curless (University of Washington), David H. Salesin (University of Washington and Microsoft Research), Richard Szeliski (Microsoft Research)
- Gradient Domain High Dynamic Range Compression. Raanan Fattal, Dani Lischinski, Michael Werman (The Hebrew University)
- Fast Bilateral Filtering for the Display of High Dynamic Range Images. Frédo Durand, Julie Dorsey (Massachusetts Institute of Technology)
- Photographic Tone Reproduction for Digital Images. Erik Reinhard, Michael Stark, Peter Shirley (University of Utah), Jim Ferwerda (Cornell University)
- Transferring Color to Greyscale Images. Tomihisa Welsh, Michael Ashikhmin, Klaus Mueller (Stony Brook University)

Modeling and Simulation
- CHARMS: A Simple Framework for Adaptive Simulation (PDF). Eitan Grinspun (California Institute of Technology), Petr Krysl (University of California, San Diego), Peter Schröder (California Institute of Technology)
- Graphical Modeling and Animation of Ductile Fracture. James F. O'Brien, Adam W. Bargteil (University of California, Berkeley), Jessica K. Hodgins (Carnegie Mellon University)
- Creating Models of Truss Structures With Optimization. Jeffry Smith, Jessica K. Hodgins, Irving Oppenheim (Carnegie Mellon University), Andrew Witkin (Pixar Animation Studios)
- A Procedural Approach to Authoring Solid Models. Barbara Cutler, Julie Dorsey, Leonard McMillan, Matthias Mueller, Robert Jagnow (Massachusetts Institute of Technology)

Geometry
- Cut-and-Paste Editing of Multiresolution Surfaces (abstract). Henning Biermann (New York University), Ioana Martin, Fausto Bernardini (IBM T.J. Watson Research Center), Denis Zorin (New York University)
- Pointshop 3D: An Interactive System for Point-Based Surface Editing. Matthias Zwicker, Mark Pauly, Oliver Knoll, Markus Gross (Eidgenössische Technische Hochschule Zürich)
- Level Set Surface Editing Operators. Ken Museth, David E. Breen (California Institute of Technology), Ross T. Whitaker (University of Utah), Alan H. Barr (California Institute of Technology)
- Dual Contouring of Hermite Data. Tao Ju, Frank Losasso, Scott Schaefer, Joe Warren (Rice University)

Parameterization and Meshes
- Interactive Geometry Remeshing. Pierre Alliez (University of Southern California and INRIA), Mark Meyer (California Institute of Technology), Mathieu Desbrun (University of Southern California)
- Geometry Images. Xianfeng Gu, Steven Gortler (Harvard University), Hugues Hoppe (Microsoft Research)
- Least Squares Conformal Maps for Automatic Texture Atlas Generation. Bruno Levy (INRIA Lorraine), Sylvain Petitjean, Nicolas Ray (CNRS), Jerome Maillot (Alias|Wavefront)
- Progressive and Lossless Compression of Arbitrary Simplicial Complexes. Pierre-Marie Gandoin, Olivier Devillers (INRIA Sophia-Antipolis)
- Linear Combination of Transformations. Marc Alexa (Technische Universität Darmstadt)

Character Animation
- Trainable Videorealistic Speech Animation. Tony Ezzat, Gadi Geiger, Tomaso Poggio (Massachusetts Institute of Technology, Center for Biological and Computational Learning)
- Turning to the Masters: Motion Capturing Cartoons. Christoph Bregler, Lorie Loeb, Erika Chuang, Hrishikesh Deshpande (Stanford University)
- Synthesis of Complex Dynamic Character Motion From Simple Animations. C. Karen Liu, Zoran Popovic (University of Washington)
- Integrated Learning for Interactive Synthetic Characters. Bruce Blumberg, Marc Downie, Yuri Ivanov, Matt Berlin, Michael Patrick Johnson, William Tomlinson (Massachusetts Institute of Technology, The Media Laboratory)

3D Acquisition and Image Based Rendering
- Image-Based 3D Photography Using Opacity Hulls. Wojciech Matusik (Massachusetts Institute of Technology), Hanspeter Pfister (Mitsubishi Electric Research Laboratory), Addy Ngan (Massachusetts Institute of Technology), Paul Beardsley (Mitsubishi Electric Research Laboratory), Leonard McMillan (Massachusetts Institute of Technology)
- Real-Time 3D Model Acquisition. Szymon Rusinkiewicz (Princeton University), Olaf Hall-Holt, Marc Levoy (Stanford University)
- Light Field Mapping: Efficient Representation and Hardware Rendering of Surface Light Fields (project page). Wei-Chao Chen (University of North Carolina at Chapel Hill), Radek Grzeszczuk, Jean-Yves Bouguet (Intel Corporation)
- Feature-Based Light Field Morphing (PDF). Baining Guo (Microsoft Research China), Zhunping Zhang (Tsinghua University), Lifeng Wang, Heung-Yeung Shum (Microsoft Research China)

Animation From Motion Capture
- Motion Textures: A Two-Level Statistical Model for Character Motion Synthesis (PDF). Yan Li, Tianshu Wang, Heung-Yeung Shum (Microsoft Research China)
- Motion Graphs. Lucas Kovar, Michael Gleicher (University of Wisconsin-Madison), Fred Pighin (USC Institute for Creative Technologies)
- Interactive Motion Generation From Examples (PDF). Okan Arikan, D.A. Forsyth (University of California, Berkeley)
- Interactive Control of Avatars Animated With Human Motion Data. Jehee Lee, Jinxiang Chai (Carnegie Mellon University), Paul S.A. Reitsma (Brown University), Jessica K. Hodgins (Carnegie Mellon University), Nancy S. Pollard (Brown University)
- Motion Capture Assisted Animation: Texturing and Synthesis. Katherine Pullen, Christoph Bregler (Stanford University)

Lighting and Appearance
- Homomorphic Factorization of BRDF-Based Lighting Computation. Lutz Latta, Andreas Kolb (University of Applied Sciences Wedel)
- Frequency Space Environment Map Rendering. Ravi Ramamoorthi, Pat Hanrahan (Stanford University)
- Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments. Peter-Pike Sloan (Microsoft Research), Jan Kautz (Max-Planck-Institut für Informatik), John Snyder (Microsoft Research)
- Interactive Global Illumination in Dynamic Scenes. Parag Tole, Fabio Pellacini, Bruce Walter, Donald P. Greenberg (Cornell University)
- A Lighting Reproduction Approach to Live-Action Compositing. Paul Debevec, Chris Tchou (USC Institute for Creative Technologies), Andreas Wenger (Brown University), Tim Hawkins, Andy Gardner, Brian Emerson (USC Institute for Creative Technologies), Ansul Panday (University of Southern California)

Shadows, Translucency, and Visibility
- Perspective Shadow Maps. Marc Stamminger, George Drettakis (REVES/INRIA Sophia-Antipolis)
- A User Interface for Interactive Cinematic Shadow Design. Fabio Pellacini, Parag Tole, Donald P. Greenberg (Cornell University)
- Robust Epsilon Visibility. Florent Duguet, George Drettakis (REVES/INRIA Sophia-Antipolis)
- A Rapid Hierarchical Rendering Technique for Translucent Materials. Henrik Wann Jensen (Stanford University), Juan Buhler (PDI/DreamWorks)

Soft Things
- DyRT: Dynamic Response Textures for Real Time Deformation Simulation with Graphics Hardware. Doug L. James, Dinesh K. Pai (University of British Columbia)
- Interactive Skeleton-Driven Dynamic Deformations. Steve Capell, Seth Green, Brian Curless, Tom Duchamp, Zoran Popovic (University of Washington)
- Robust Treatment of Collisions, Contact, and Friction for Cloth Animation. Robert Bridson, Ronald Fedkiw (Stanford University), John Anderson (Industrial Light & Magic)
- Stable but Responsive Cloth. Kwang-Jin Choi, Hyeong-Seok Ko (Seoul National University)

Humans and Animals
- Articulated Body Deformation From Range Scan Data. Brett Allen, Brian Curless, Zoran Popovic (University of Washington)
- Interactive Multi-Resolution Hair Modeling and Editing. Tae-Yong Kim, Ulrich Neumann (University of Southern California)
- Modeling and Rendering of Realistic Feathers (PDF). Yanyun Chen, Yingquing Xu, Baining Guo, Heung-Yeung Shum (Microsoft Research China)
- Eyes Alive. Sooha P. Lee (University of Pennsylvania), Jeremy B. Badler (The Smith-Kettlewell Eye Research Institute), Norman I. Badler (University of Pennsylvania)
- Physiological Measures of Presence in Virtual Environments. Michael Meehan, Brent Insko, Mary Whitton, Frederick P. Brooks, Jr. (University of North Carolina at Chapel Hill)

Texture Synthesis
- Synthesis of Bidirectional Texture Functions on Arbitrary Surfaces (PDF). Xin Tong (Microsoft Research), Jingdan Zhang (Tsinghua University), Ligang Liu (Microsoft Research), Xi Wang (Tsinghua University), Baining Guo, Heung-Yeung Shum (Microsoft Research China)
- Jigsaw Image Mosaics. Junhwan Kim, Fabio Pellacini (Cornell University)
- Self-Similarity Based Texture Editing. Stephen Brooks, Neil Dodgson (University of Cambridge)
- Hierarchical Pattern Mapping. Cyril Soler, Marie-Paule Cani, Alexis Angelidis (iMAGIS-GRAVIR)
- Improving Noise. Ken Perlin (New York University)

Graphics Hardware
- SAGE Graphics Architecture (XVR-4000 White Paper). Michael F. Deering, David Naegle (Sun Microsystems, Inc.)
- Chromium: A Stream Processing Framework for Interactive Rendering on Clusters (project page). Greg Humphreys, Mike Houston, Yi-Ren Ng (Stanford University), Randall Frank, Sean Ahern (Lawrence Livermore National Laboratory), Peter Kirchner, Jim Klosowski (IBM Research)
- Ray Tracing on Programmable Graphics Hardware. Timothy J. Purcell, Ian Buck (Stanford University), William R. Mark (Stanford University [now at NVIDIA]), Pat Hanrahan (Stanford University)
- Shader-Driven Compilation of Rendering Assets (PDF hosted locally at author's request). Paul Lalonde, Eric Schenk (Electronic Arts (Canada) Inc.)

Fluids and Fire
- Physically Based Modeling and Animation of Fire. Duc Nguyen, Ronald Fedkiw, Henrik Wann Jensen (Stanford University)
- Structural Modeling of Natural Flames (PDF hosted locally at author's request). Arnauld Lamorlette, Nick Foster (PDI/DreamWorks)
- Animation and Rendering of Complex Water Surfaces. Douglas P. Enright, Steve Marschner, Ronald Fedkiw (Stanford University)
- Image Based Flow Visualization. Jarke J. van Wijk (Technische Universiteit Eindhoven)

Painting and Non-Photorealistic Graphics
- WYSIWYG NPR: Drawing Strokes Directly on 3D Models. Robert D. Kalnins, Lee Markosian (Princeton University), Barbara J. Meier, Michael A. Kowalski, Joseph C. Lee (Brown University), Philip L. Davidson, Matthew Webb (Princeton University), John F. Hughes (Brown University), Adam Finkelstein (Princeton University)
- Octree Textures. David Benson, Joel Davis (Industrial Light & Magic)
- Painting and Rendering Textures on Unparameterized Models (PDF). David (grue) DeBry, Jonathan Gibbs, Devorah DeLeon Petty, Nate Robins (Thrown Clear Productions)
- Stylization and Abstraction of Photographs. Doug DeCarlo, Anthony Santella (Rutgers University)
- Object-Based Image Editing (thesis). William Barrett, Alan Cheney (Brigham Young University)
How Networks Work

In computing, a network connects isolated workstations or hosts over physical links to form data links, so that machines can share resources and communicate.
The World Wide Web, the graphical portion of the Internet, is by far the most popular part of the Internet. Once you spend time on the Web, you will begin to feel like there is no limit to what you can discover. The Web allows rich and diverse communication by displaying text, graphics, animation, photos, sound, and video.

So just what is this miraculous creation? The Web physically consists of your personal computer, web browser software, a connection to an Internet service provider, computers called servers that host digital data, and routers and switches to direct the flow of information.

The Web is known as a client-server system. Your computer is the client; the remote computers that store electronic files are the servers. Here's how it works: let's say you want to pay a visit to the Louvre museum website. First you enter the address, or URL, of the website in your web browser (more about this shortly).
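That first step, entering a URL, is where the client-server exchange begins: the browser splits the address into the pieces that decide which server to contact and what to ask for. A small illustration with Python's standard library (the Louvre address below is hypothetical):

```python
from urllib.parse import urlparse

# Break a URL into the parts the browser and servers act on.
url = "http://www.louvre.fr/en/visit?lang=en"
parts = urlparse(url)

print(parts.scheme)    # protocol the client speaks, e.g. "http"
print(parts.netloc)    # server host the request is routed to
print(parts.path)      # resource requested from that server
print(parts.query)     # extra parameters sent along
```

The browser then resolves the host name to an address, opens a connection to that server, and requests the path; the routers and switches mentioned above carry the request and the reply.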
Information & Communications, 2020, No. 03 (Serial No. 207)

Modeling and Control of a Dynamic Model of a Two-Degree-of-Freedom Robotic Arm
Wang Lei, Chen Chensheng, Zhang Wenwen (Sino-German College, Tongji University, Shanghai 202001)

Abstract: Robot system modeling is finding ever wider application in layout evaluation, feasibility studies, animated presentation, and off-line programming.
This paper models a two-degree-of-freedom robotic arm based on the Lagrangian dynamic equations.
Using the resulting model, the influence of gravity on the two-degree-of-freedom arm is analyzed, as well as the trajectory of an arm that is not at a stable position under gravity.
Based on the arm's mathematical model and the Simulink simulation environment, a simulation model of the arm is established.
The inverse dynamics method is used to control the arm, and its control performance is observed [1].
Through simulation and modeling, one can understand the arm's dynamic model and the control problems associated with it.
Keywords: dynamic model; mathematical model derivation; robot modeling; gravity analysis; inverse dynamics control
CLC number: TP241; Document code: A; Article ID: 1673-1131(2020)03-0040-03

The simulation and control of a two-degree-of-freedom robot arm
Wang Lei, Chen Chensheng, Zhang Wenwen (Sino German College of Tongji University, Shanghai 201804)

Abstract: The simulation of robot systems is becoming very popular; it can be used for layout evaluation, feasibility studies, presentations with animation, and off-line programming [2]. In this paper, a two-degrees-of-freedom manipulator is modeled based on Lagrange's dynamic equation. Through the established model, the influence of gravity on the two-degree-of-freedom manipulator and the trajectory of a manipulator that is not in a stable position under the action of gravity are analyzed. Based on the mathematical model of the robotic arm and the Simulink simulation environment, a simulation model of the robotic arm is established. The inverse dynamics method is used to control the manipulator, and the control effect on the manipulator is observed. Through simulation modeling, one can understand the dynamics model of the robotic arm and the control problems of the dynamic model of the robotic arm.

Key words: dynamic model; mathematical model derivation; robot modeling; gravity analysis; inverse dynamic control

0 Introduction

Robotics is a distinctive engineering science, encompassing robot design, modeling, control, and use.
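The inverse dynamics (computed-torque) control mentioned in the abstract can be illustrated on a single link under gravity, a deliberate simplification of the paper's two-link arm. The physical constants and gains below are illustrative assumptions; the idea is that the control law cancels the modeled gravity torque and imposes PD error dynamics.

```python
from math import sin, pi

# Single-link arm: I*theta_dd + m*g*l*sin(theta) = tau
# (a stand-in for the paper's two-link Lagrangian model).
I, m, g, l = 0.5, 1.0, 9.81, 0.5     # inertia, mass, gravity, arm length
Kp, Kd = 60.0, 15.0                  # PD gains on the tracking error
dt = 0.001

theta, omega = 0.0, 0.0              # start at rest, hanging straight down
target = pi / 4                      # desired joint angle (rad)

for _ in range(5000):                # 5 seconds of simulated time
    e, e_dot = target - theta, -omega
    # Inverse-dynamics law: cancel gravity, then impose PD error dynamics.
    tau = I * (Kp * e + Kd * e_dot) + m * g * l * sin(theta)
    # Plant dynamics integrated with semi-implicit Euler:
    alpha = (tau - m * g * l * sin(theta)) / I
    omega += alpha * dt
    theta += omega * dt
```

Because the gravity term is cancelled exactly, the closed loop reduces to the linear error equation e'' + Kd e' + Kp e = 0, so the joint settles at the target with no steady-state error; with an imperfect gravity model the same structure would leave a residual error, which is the usual motivation for adding integral action.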
COMPUTER ANIMATION AND VIRTUAL WORLDSComp.Anim.Virtual Worlds 2004;15:95–108(DOI:10.1002/cav.8)******************************************************************************************************Fast and learnable behavioral andcognitive modeling for virtual character animationBy Jonathan Dinerstein*,Parris K.Egbert,Hugo de Garis and Nelson Dinerstein************************************************************************************Behavioral and cognitive modeling for virtual characters is a promising field.It significantly reduces the workload on the animator,allowing characters to act autonomously in abelievable fashion.It also makes interactivity between humans and virtual characters more practical than ever before.In this paper we present a novel technique where an artificial neural network is used to approximate a cognitive model.This allows us to execute the model much more quickly,making cognitively empowered characters more practical for interactive applications.Through this approach,we can animate several thousand intelligent characters in real time on a PC.We also present a novel technique for how a virtual character,instead of using an explicit model supplied by the user,can automatically learn an unknown behavioral/cognitive model by itself through reinforcement learning.The ability to learn without an explicit model appears promising for helping behavioral and cognitive modeling become more broadly accepted and used in the computer graphics community,as it can further reduce the workload on the animator.Further,it provides solutions for problems that cannot easily be modeled explicitly.Copyright #2004John Wiley &Sons,Ltd.Received:May 2003;Revised:September 2003KEY WORDS :computer animation;synthetic characters;behavioral modeling;cognitivemodeling;machine learning;reinforcement learningIntroductionVirtual characters are an important part of computer graphics.These characters have taken forms such as synthetic humans,animals,mythological 
creatures, and non-organic objects that exhibit lifelike properties (walking lamps, etc.). Their uses include entertainment, training, and simulation. As computing and rendering power continue to increase, virtual characters will only become more commonplace and important.

One of the fundamental challenges involved in using virtual characters is animating them. It can often be difficult and time consuming to explicitly define all aspects of the behavior and animation of a complex virtual character. Further, the desired behavior may be impossible to define ahead of time if the character's virtual world changes in unexpected or diverse ways. For these reasons, it is desirable to make virtual characters as autonomous and intelligent as possible while still maintaining animator control over their high-level goals. This can be accomplished with a behavioral model: an executable model defining how the character should react to stimuli from its environment. Alternatively, we can use a cognitive model: an executable model of the character's thought process. A behavioral model is reactive (i.e., seeks to fulfill immediate goals), whereas a cognitive model seeks to accomplish long-term goals through planning: a search for what actions should be performed in what order to reach a goal state. Thus a cognitive model is generally considered more powerful than a behavioral one, but can require significantly more processing power. As can be seen, behavioral and cognitive modeling have unique strengths and weaknesses, and each has proven to be very useful for virtual character animation.

However, despite the success of these techniques in certain domains, some important arguments have been brought against current behavioral and cognitive modeling systems for autonomous characters in computer graphics.

*Correspondence to: Jonathan Dinerstein, Brigham Young University, 3366 TMCB, Provo, UT 84602, USA. E-mail: jondinerstein@

First, cognitive models are traditionally very slow to execute, as a tree search must be performed to formulate a plan. This speed bottleneck requires the character to make suboptimal decisions and limits the number of virtual characters that can be used simultaneously in real time. Also, since a search of all candidate actions throughout time is performed, it is necessary to use only a small set of candidate actions (which is not practical for all problems, especially those with continuous action spaces). Note that behavioral models are currently more popular than cognitive models, partially because they are usually significantly faster to execute.

Second, for some problems, it can be very difficult and time consuming to construct explicit behavioral or cognitive models (this is known as the curse of modeling in the artificial intelligence field). For example, it is not uncommon for behavioral/cognitive models to require weeks to design and program. Therefore, it would be extremely beneficial to have virtual characters be able to automatically learn behavioral and cognitive models if possible, alleviating the animator of this task.

In this paper, we present two novel techniques. In the first technique, an artificial neural network is used to approximate a cognitive model. This allows us to execute our cognitive model much more quickly, making intelligent characters more practical for interactive applications. Through this approach, we can animate several thousand intelligent characters in real time on a PC. Further, this approach allows us to use optimal plans rather than suboptimal plans.

The second technique we introduce allows a virtual character to automatically learn an unknown behavioral or cognitive model through reinforcement learning. The ability to learn without an explicit model appears promising for helping behavioral and cognitive modeling become more broadly used in the computer graphics community, as this can further
reduce the workload on the animator. Further, it provides solutions for problems that cannot easily be modeled explicitly.

In summary, this paper presents the following original contributions:

* a novel technique for fast execution of a cognitive model using neural network approximation;
* a novel technique for a virtual character to automatically learn an approximate behavioral or cognitive model by itself (we call this offline character learning).

We present each of these techniques in turn. We begin by surveying related work. We then give a brief introduction to cognitive modeling (as it is less well known than behavioral modeling) and neural networks. Next we present our technique for using neural networks to rapidly approximate cognitive models. We then give a brief introduction to reinforcement learning, and then present our technique for offline character learning. Next we present our experience with several experimental applications and the lessons learned. Finally, we conclude with a summary and possible directions for future work.

Related Work

Previous computer graphics research in the area of autonomous virtual characters includes automatic generation of motion primitives.1–7 This is useful for reducing the work required by animators. More recently, Faloutsos et al.8 present a technique for learning the preconditions from which a given specialist controller can succeed at its task, thus allowing them to be combined into a general-purpose motor system for physically based animated characters. Note that these approaches to motor learning focus on learning how to move to minimize a cost function (such as the energy used).
Therefore, these techniques do not embody the virtual characters with any decision-making abilities. However, these techniques can be used in a complementary way with behavioral/cognitive modeling in a multilevel animation system. In other words, a behavioral/cognitive model makes a high-level decision for the character (e.g., 'walk left'), which is then carried out by a lower-level animation system (e.g., skeletal animation).

A great deal of research has also been performed in control of animated autonomous characters.9–12 These techniques have produced impressive results, but are limited in two aspects. First, they have no ability to learn, and therefore are limited to explicit prespecified behavior. Secondly, they only perform behavioral control, not cognitive control (where behavioral means reactive decision making and cognitive means reasoning and planning to accomplish long-term tasks).

Online behavioral learning has only begun to be explored in computer graphics.13–15 A notable example is Blumberg et al.,16 where a virtual dog can be interactively taught by the user to exhibit desired behavior. This technique is based on reinforcement learning and has been shown to work extremely well. However, it has no support for long-term reasoning to accomplish complex tasks. Also, since these learning techniques are all designed to be used online, they are (for the sake of interactive speed) limited in terms of how much they can learn.

To endow virtual characters with long-term reasoning, cognitive modeling for computer graphics was recently introduced.17 Cognitive modeling can provide a virtual character with enough intelligence to automatically perform long-term, complex tasks in a believable manner. The techniques we present in this paper build on the successes of traditional behavioral and cognitive modeling with the goal of alleviating two important weaknesses: performance of cognitive models, and time-consuming construction of explicit behavioral and cognitive models. We will first present our technique for speeding up cognitive model execution through approximation. We will briefly review cognitive modeling and neural networks, and then present our new technique.

Introduction to Cognitive Modeling

Cognitive modeling17–20 is closely related to behavioral modeling, but is less well known, so we now provide a brief introduction. A cognitive model defines what a character knows, how that knowledge is acquired, and how it can be used to plan actions. The traditional approach to cognitive modeling is a symbolic approach. It uses a type of first-order logic known as 'the situation calculus', wherein the virtual world is seen as a sequence of situations, each of which is a 'snapshot' of the state of the world.

The most important component of a cognitive model is planning. Planning is the task of formulating a sequence of actions that are expected to achieve a goal. Planning is performed through a tree search of all candidate actions throughout time (see Figure 1). However, it is usually cost prohibitive to plan all the way to the goal state. Therefore, any given plan is usually only a partial path to the goal state, with new partial plans formulated later on. The animator has high-level control over the virtual character since she can supply it with a goal state. Note that to achieve real-time performance it is necessary to have the goal hard-coded into the cognitive model. This is because it is necessary to implement custom heuristics to speed up the tree search for planning (for further details see Funge et al.17). Therefore, either an animator and programmer must collaborate, or the programmer must also be the animator.

This traditional symbolic approach to cognitive modeling has
many important strengths. It is explicit, has formal semantics, and is both human readable and executable. It also has a firm mathematical foundation and is well established in AI theory. However, it also has some significant weaknesses with respect to application in computer graphics animation. Since planning is performed through a tree search, and the branching factor is the number of actions to consider, the set of candidate actions must be kept very small if real-time performance is to be achieved. Also, to keep real-time performance, we are limited to short (suboptimal) plans. Another performance problem that is unique to computer graphics is the fact that the user may want to have many intelligent virtual characters interacting in real time. In most situations, on a commodity PC, this is impossible to achieve with the traditional symbolic approach to planning. Another limitation is that it is not possible to have a virtual character automatically learn a cognitive model by itself (which could further reduce the workload on the animator, and provide solutions to very difficult problems).

Introduction to Artificial Neural Networks

Note that there are many machine learning techniques, many of which could be used to approximate an explicit cognitive model. However, we have chosen to use neural networks because they are both compact and computationally efficient. In this section we briefly review a common type of artificial neural network.22 A more thorough introduction can be found in Grzeszczuk et al.5 There are many libraries and applications publicly available* (free and commercial) for constructing and executing artificial neural nets.

Figure 1. Planning is performed with a tree search of all candidate actions throughout time. To perform planning in real time without dedicated hardware, it is usually necessary to greatly limit the number of candidate actions and to only formulate short (suboptimal) plans.

*For example, SNNS (rmatik.uni-tuebingen.de/pub/SNNS) and Xerion (/pub/xerion).

A neuron can be modeled as a mathematical operator that maps R^p → R. Consider Figure 2(a). Neuron j receives p input signals (denoted s_i). These signals are scaled by associated connection weights w_ij. The neuron sums its input signals:

    z_j = w_0j + Σ_{i=1}^{p} s_i w_ij = u · w_j

where u = [1, s_1, s_2, ..., s_p] is the input vector and w_j = [w_0j, w_1j, ..., w_pj] is the connection weight vector. The neuron outputs a signal s_j = g(z_j), where g is an activation function:

    s_j = g(z_j) = 1 / (1 + e^(−z_j))

A feedforward artificial neural network (see Figure 2b), also known simply as a neural net, is a set of interconnected neurons organized in layers. Layer l receives inputs only from the neurons of layer l−1. The first layer of neurons is the input layer and the last layer is the output layer. The intermediate layers are called hidden layers. Note that the input layer has no functionality, as its neurons are simply 'containers' for the network inputs.

A neural network 'learns' by adjusting its connection weights such that it can perform a desired computational task. This involves considering input–output examples of the desired functionality (or target function). The standard approach to training a neural net is the backpropagation training algorithm.23 Note that it has been proven that neural networks are universal function approximators (see Hornik et al.24).

An alternative approach that we considered was to use the continuous k-nearest neighbor algorithm.21 Unlike neural nets, k-nearest neighbor provides a local approximation of the target function, and can be used automatically without the user carefully selecting inputs. Also, k-nearest neighbor is guaranteed to correctly reproduce
the examples that it has been provided (whereas no such guarantee exists with neural nets). However, k-nearest neighbor requires the explicit storage of many examples of the target function. Because of this storage issue, we opted to use a neural net approach.

Figure 2. (a) Mathematical model of a neuron j. (b) A three-layer feedforward neural network with p inputs and q outputs.

Fast Animation Using Neural Network Approximation of Cognitive Models

The novel technique we now present is analogous to how a human becomes an expert at a task. As an example, let's consider typing on a computer keyboard. When a person first learns how to type, she must search the keyboard with her eyes to find every key she wishes to press. However, after enough experience, she learns (i.e., memorizes) where the keys are. Thereafter, she can type more quickly, only having to recall where the keys are. There is a strong parallel between this example and all other tasks humans perform. After enough experience we no longer have to implicitly 'plan' or 'search' for our actions; we simply recall what to do.

In our technique, we use a neural net to learn (i.e., memorize) the decisions made through planning by a cognitive model to achieve a goal. Thereafter, we can quickly recall these decisions by executing the trained neural net. Training is done offline and then the trained network is used online. Thus, we can achieve intelligent virtual characters in real time using very few CPU cycles.

We now present our technique in detail, first discussing the structure of our technique, followed by how to train the neural network, and then finally how to use the trained network in practice.

Structure

A cognitive model with a goal defines a policy. A policy specifies what action to perform for a given state. A policy is formulated as

    a = π(i)

where i is the current state and a is the action to perform. This is a non-context-sensitive formulation, which covers most cognitive models. However, if desired, context information can also be supplied as input (e.g., the last n actions can be input). We train our feed-forward neural net to approximate a specific policy π. We denote the neural net approximation of the policy π̂ (see Figure 3a). Note that the current state (network input) and action (output) will likely be vector-valued for non-trivial virtual worlds and characters. Further, a logical selection and organization of the input and output components can help make the target function as smooth as possible (and therefore easier to approximate). Selecting network inputs will be discussed in more detail later. Also note that the input should be normalized and the output denormalized for use. Specifically, the normalized input components should have zero means and unit variances, and the normalized output components should have 0.5 means and be in the range [0.1, 0.9]. This ensures that all inputs contribute equivalently, and that the output is in a range the neural net's activation function can produce.

An important question is how many hidden layers (and how many neurons in each of those hidden layers) we need to use in a neural net to achieve a good approximation of a policy. This is important because we want a reasonable approximation, but we also want the neural net to be as fast to execute as possible (i.e., there is a speed/quality trade-off). We have found that, at minimum, it is best to use one hidden layer with the same number of neurons as there are inputs. If a higher-quality approximation is desired, then it is useful to use two hidden layers, the
first with 2p+1 neurons (where p is the number of inputs), and the second with 2q+1 neurons (where q is the number of outputs). We have found that any more layers and/or neurons than this usually provides little benefit.

Note that the state and action spaces can be continuous or discrete, as all processing in a neural network is real-valued. If discrete outputs are desired, the real-valued outputs of the network should simply be quantized to predefined discrete values.

Even though cognitive models (i.e., policies) produce good animations in most cases, there are some cases in which they can appear too predictable. This is due to the fact that cognitive models are fundamentally deterministic (mapping states to actions). We now introduce an alternative form of our technique that addresses this problem. First note that, in some cases, it may be interesting to not always perform the same action for a given state (even if that action is most desirable). Occasional slight randomness in the decision making of an intelligent virtual character, performed in the right manner, can dramatically improve the aesthetic quality of an animation when predictability cannot be tolerated. However, it is not enough to simply choose actions at random, as this makes the virtual character appear very unintelligent. Instead, we do this in a much more believable fashion with a modification of the structure of our technique (see Figure 3b). We formulate it as a priority function:

    priority = P_π(i, a)

The priority function represents the value of performing any given action a from the current state i under a policy π. The priority can simply be an ordering of the best action to the worst, or can represent actual value information (i.e., how much an action helps the character reach a goal state). Using a priority function allows us to query for the best action at any given state, but also lets us choose an alternative action if desired (with knowledge of that action's cost). For example, by using the known priorities of all candidate actions from the current state, we can select an action probabilistically. Thus our virtual character is able to make intelligent, but non-deterministic, decisions for all situations. However, note that while this non-deterministic technique is useful, we focus on standard policies in this paper. This is because they are simpler, faster, and correspond to the standard approach to cognitive modeling (i.e., always using the best possible action in a given state).

Figure 3. (a) Neural net approximation of a policy π. The network input is the current state; the output is the action to perform. T and T⁻¹ normalize the input and denormalize the output, respectively. (b) Neural net approximation of a priority function.

Training the Neural Network

We train the neural net using the backpropagation algorithm with examples of the cognitive model's decisions (i.e., policy). A naive approach is to randomly select many examples of the entire state space. However, this is wasteful because we are usually only interested in a small portion of the state space. This is because, as a character makes intelligent decisions, it will find itself traversing into only a subset of all possible states. As an example, consider a sheepdog that is herding a flock of sheep. It is illogical for the dog to become afraid of the sheep and run away. It is equally illogical for the sheep to herd the dog. Therefore, such states should never be experienced in practice. We have found that by ignoring uninteresting states the neural net's training can focus on more important states, resulting in a higher-quality approximation. However, for the sake of robustness, it may be desirable to also use a few
randomly selected states that we never expect to encounter (to ensure that the neural net has at least seen a coarse sampling of the entire state space).

To focus on the subset of the state space of interest, we generate examples by running many animations with the cognitive model. At each iteration of an animation, we have a current state and the action decided upon, which are stored for later use as training examples. We have found that using a large number of examples is best to achieve a well-generalized trained network. Specifically, we prefer to use between 5000 and 20,000 examples. Note that this is far more than is normally used when training neural nets, but we found that the use of so many examples helps to ensure that all interesting states are visited at least once (or at least a very similar state is visited). Finally, note that if a small time step is used between actions, it may be desirable to keep only an even subsampling of the examples generated through animation. This is because, with a small time step, it is likely that little state change will occur with each step and therefore temporally adjacent examples may be virtually identical.
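The collect-and-subsample procedure just described might be sketched as follows (a minimal illustration, not the authors' code; `policy` stands in for the slow planning cognitive model, `step_world` for one step of the simulation, and `keep_every` for the even subsampling rate):

```python
def collect_examples(policy, step_world, initial_state,
                     episodes=100, steps=200, keep_every=4):
    """Run many animations with the cognitive model and record (state, action)
    pairs, keeping only every keep_every-th step so that near-identical,
    temporally adjacent examples are thinned out."""
    examples = []
    for _ in range(episodes):
        state = initial_state()
        for t in range(steps):
            action = policy(state)        # decision made by the (expensive) planner
            if t % keep_every == 0:       # even subsampling of the trace
                examples.append((state, action))
            state = step_world(state, action)
    return examples
```

With 100 episodes of 200 steps and `keep_every=4`, this yields 5000 examples, the low end of the 5000 to 20,000 range suggested above.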
We used a backpropagation learning rate of ≈0.1 and momentum of ≈0.4 in all our experiments. Training a neural net took about 15 minutes on average using a 1.7 GHz PC. In all of our experiments, an appropriate selection of inputs to the neural net resulted in a good approximation of a cognitive model.

Choosing Salient Variables and Features

Training a neural network is not a conceptually difficult task. All that is required is to supply the backpropagation algorithm with examples of the desired behavior we want the network to exhibit. However, there is one well-known challenge that we need to discuss: selecting network inputs. This is critical as too many inputs can make a neural net computationally infeasible. Also, a poor choice of inputs can be incomplete or may define a mapping that is too rough for a neural net to approximate well. General tips for input selection can be found in Haykin,22 so we only briefly mention key points and focus our current discussion on lessons we have learned specific to approximation of cognitive models.

The inputs should be salient variables (no constants), which have a strong impact in determining the answer of the function. Further, if possible, features should be used. Features are transformations or combinations of state variables. This is useful not only for reducing the total number of inputs but also for making the input–output mapping smoother. Through experience, we have discovered some useful features that we now present.
When approximating cognitive models, many of the potential inputs represent raw 3D geometry information (position, orientation, etc.). We have found that it is very important to make all inputs rotation and translation invariant if possible. Specifically, we have found it very useful to transform all inputs so that they are relative to the local coordinate system of the virtual character. That is, rather than considering the origin to be at some fixed point in space, transform the world such that the origin is with respect to the virtual character. This not only makes it unnecessary to input the character's current position and orientation, but also makes the mapping smoother.

We have also found it useful, in some cases, to separate critical information into distinct inputs. For example, if a cognitive model relies on knowing the direction and distance to an object in its virtual world, this information could be presented as a scaled vector (dx, dy, dz). However, we have found that in many cases it is better to present this information as a normalized vector with distance (x, y, z, d), as the decision-making may be dramatically different depending on the distance. In other words, if a piece of information is very important to the decision-making of a cognitive model, the mapping will likely be more smooth if that information is presented as a separate input to the neural net. Thus we need to balance the desire to keep the number of inputs low with clearly presenting all salient information.

Finally, note that choosing good inputs sometimes requires experimentation to see what choice produces the best trained network, as input selection can be a difficult task. However, recall that if storage is not a concern k-nearest neighbor can be used instead of a neural network and (as described in Mitchell21) can automatically discover those inputs that are necessary to approximate the target function. Several practical examples of selecting good inputs for neural networks to approximate cognitive models are given in the results section of this paper.
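Pulling these pieces together, a minimal sketch of the approach might look like the following (illustrative only, not the authors' implementation; the weights here are random and untrained, and the (x, y, z, d) feature follows the advice above):

```python
import math
import random

def direction_distance(char_pos, obj_pos):
    """Split a relative offset into a unit direction plus a separate distance
    input (x, y, z, d), which tends to smooth the state-to-action mapping."""
    dx, dy, dz = (o - c for o, c in zip(obj_pos, char_pos))
    d = math.sqrt(dx * dx + dy * dy + dz * dz)
    return [0.0, 0.0, 0.0, 0.0] if d == 0.0 else [dx / d, dy / d, dz / d, d]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, layers):
    """Feedforward pass; each neuron is stored as [bias, w_1, ..., w_p]."""
    for neurons in layers:
        x = [sigmoid(w[0] + sum(s * wi for s, wi in zip(x, w[1:])))
             for w in neurons]
    return x

def make_policy_net(p, q, seed=0):
    """Random (untrained) net with the hidden sizing suggested in the text:
    p inputs -> 2p+1 -> 2q+1 -> q outputs."""
    rng = random.Random(seed)
    sizes = [p, 2 * p + 1, 2 * q + 1, q]
    return [[[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
             for _ in range(n_out)]
            for n_in, n_out in zip(sizes, sizes[1:])]
```

In practice the weights would of course be fit with backpropagation to the recorded (state, action) examples, and the sigmoid outputs denormalized from the [0.1, 0.9] training range back to the action ranges.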
Anhui Famous Schools (Tongcheng, Anqing, Anhui Province), 2023–2024 Academic Year, Senior One, Second Semester, May Midterm Joint Examination: English

I. Reading Comprehension

Are you looking for inspiration for your next travel destination? Lonely Planet has published its yearly list of top places to go. Whether you're looking for adventure, culture, or value, there's something for every type of traveler.

Top City: Nairobi
Nairobi, the Kenyan capital, was chosen for its unique music and dance culture, and impressive local food. Nairobi National Park is a must-see, where visitors can see over 400 species wandering the open grass plains while high buildings twinkle on the horizon.

Top Country: Mongolia
Mongolia is loved for its wide-open spaces, adventure activities, and unique food. The Mongolian government has announced 2023 through 2025 as the "Years to Visit Mongolia," easing entry conditions for international travelers and improving public facilities, including a new $650 million airport.

Top Sustainable (可持续发展的) Travel Destination: Spain
The publisher also recommended Spain for the country's efforts to expand renewable energy, to develop off-season travel, and to bring tourism to new destinations, such as the port city of Valencia, easing pressures on hot spots such as Barcelona.

Top Best-Value Location: The American Midwest
The American Midwest, including the cities of Chicago and Detroit, is also on the list, as it offers a high-quality travel experience at a reasonable cost. Visitors can enjoy excellent services and facilities without having to spend too much money.

Top Beach Destination: Donegal
Donegal, Ireland's most northwestern county, has the country's longest coastline and over 100 beaches, where visitors can enjoy the highest sea cliffs in Europe.

1. What do Nairobi and Mongolia have in common?
A. Tasty food.  B. Diverse culture.  C. Breathtaking views.  D. Wide-open spaces.
2. Why is Spain on the list?
A. It improves public facilities for international travelers.
B. It provides sustainable travel destinations for visitors.
C. It has the world's longest coastline and plenty of beaches.
D. It offers a high-quality travel experience at a reasonable cost.
3. In which section of a newspaper can you find this text?
A. Architecture.  B. Environment.  C. Tourism.  D. Publishing.

The topic of China's "slash youth" has fueled heated debate on social media recently, drawing attention to the diverse pursuits of the younger generation.

The slash youth, who could also be called slashers or the slash generation, are those who refuse to be defined or bound by just one personal identity. They are keen to present themselves in multiple and sometimes different identities, such as a nurse and model, a teacher and stand-up comedian, and an engineer and band player. Rather than material comforts, they pursue meaningful achievement. A diverse career can enable them to develop new experiences and talents as well as social and professional networks, leading to greater flexibility in life and work, recognition and satisfaction.

A study on "slash youth" published in the China Youth Research magazine analyzes the background of this phenomenon: with part of modern society's structured organization and stability norms (规范) broken, flexible labor markets and structural unemployment have appeared, removing the sense of job security of young people in employment. Young people also face the problem of self-realization, including the sense of powerlessness, lost sense of value and lack of self-identity in the profession, which forces them to re-find the meaning of work.

"The 'slash life' shows that our society is becoming more and more diversified and inclusive, and it welcomes everyone's self-fulfillment," said Shi Yanrong, an associate researcher from Tianjin Academy of Social Sciences. "Young people no longer have to rely on work and money for their sense of self-worth.
They tend to create their own identities." The researcher added that, against the background that people have multiple interests and are willing to pay for these interests nowadays, the "slash life" of the youth would drive the growth of a new economy, just as animation culture affected the market.

4. What do we know about the slash youth?
A. They tend to undertake multiple careers.
B. They are keen to pursue material comforts.
C. They enjoy posting themselves on social media.
D. They can easily adjust themselves to challenges in life.
5. What is paragraph 4 mainly about?
A. The importance of flexible labor markets.
B. Reasons for the appearance of slash youth.
C. The pursuit of the youth in their profession.
D. Causes of losing self-identity among the youth.
6. What does Shi Yanrong think of the slash life?
A. It boosts the rise of a new economy.
B. It helps increase people's independence.
C. It inspires young people's creativity at work.
D. It enables young people to earn high incomes.
7. What is the purpose of the text?
A. To provide career guidance.
B. To analyze a cultural difference.
C. To introduce a social phenomenon.
D. To share a unique life experience.

Ignoring messages is frowned upon (皱眉) in these always-on times. And according to most people, dropping out of communication is blamed as "ghosting," which, in the years since the term became widespread, has become a deadly killer of digital communications.

But in these exhausting times, when so many are overburdened with family responsibilities, stress, grief and anxiety, perhaps we should let go of the outdated, demanding requirement to participate in endless back-and-forth conversations.
As Daniel Post Senning of the Emily Post Institute said, "You have to be a civil and decent person, but you don't have to give your time and attention to everyone who asks for it."

In an era when we understand more and more the importance of rest and time away from screens, triaging (优先选择) can be necessary for our peace of mind and relationships with the people in our lives: when we are on vacation; when it's after 7 pm; when we are at the dinner table; when we are exercising. If you don't reply immediately to a message during one of these times, don't apologize. Just reply when you can. Or don't.

Is triaging a blanket permission to fall into unexplained silence or become a bottleneck to your colleagues' work? No. Instead, establish boundaries with your work colleagues frankly so that they don't panic when they can't reach you.

It's also worth asking yourself who is putting the pressure on for immediate response. Perhaps it's not the message sender, but you.

8. Which statement do most people agree with?
A. Giving full attention to whoever asks for it is a must.
B. Dropping out of a conversation equals killing other people.
C. It's considered improper to cut off from an ongoing conversation.
D. It's highly advised to carry on conversations despite heavy burdens.
9. How does the author develop paragraph 3?
A. By listing figures.  B. By giving examples.  C. By comparing facts.  D. By stating reasons.
10. What does the underlined word "bottleneck" in paragraph 4 mean?
A. Barrier.  B. Milestone.  C. Opportunity.  D. Masterpiece.
11. What can be a suitable title for the text?
A. Important Emails: Send Them Quickly
B. Urgent Calls: Answer Them Immediately
C. Unfriendly People: Avoid Them Carefully
D. Unnecessary Messages: Ignore Them Directly

Artificial intelligence (AI) is showing promise in earthquake prediction, challenging the long-held belief that it is impossible.
Researchers at the University of Texas at Austin have developed an AI algorithm (算法) that correctly predicted 70% of earthquakes a week in advance during a trial in China and provided accurate strength calculations for the predicted earthquakes.

The research team believes their method succeeded because they stuck with a relatively simple machine-learning approach. The AI was provided with a set of statistical features based on the team's knowledge of earthquake physics, and then instructed to train itself using a five-year database of earthquake recordings. Once trained, the AI provided its predictions by listening for signs of incoming earthquakes within the background rumblings (隆隆声) in the Earth.

This work is clearly a milestone in research on AI-driven earthquake prediction. "You don't see earthquakes coming," explains Alexandros Savvaidis, a senior research scientist who leads the Texas Seismological Network Program (TexNet). "It's a matter of milliseconds, and the only thing you can control is how prepared you are. Even with the 70% accuracy, that's a huge result and could help minimize economic and human losses and has the potential to remarkably improve earthquake preparation worldwide."

While it is unknown whether the same approach will work at other locations, the researchers are confident that their AI algorithm could produce more accurate predictions if used in areas with reliable earthquake-tracking networks. The next step is to test the artificial intelligence in Texas, since UT's Bureau TexNet has 300 earthquake stations and over six years' worth of continuous records, making it an ideal location for these purposes.

Eventually, the authors hope to combine the system with physics-based models. This strategy could prove especially important where data is poor or lacking.
"That may be a long way off, but many advances such as this one, taken together, are what moves science forward," concludes Scott Tinker, the bureau's director.

12. How does the AI forecast earthquakes?
A. By identifying data from the satellites.
B. By analyzing background sounds in the Earth.
C. By modeling data based on earthquake recordings.
D. By monitoring changes in the Earth's magnetic field.

13. What does Alexandros Savvaidis intend to show in paragraph 3?
A. The ways to reduce losses in earthquakes.
B. The importance of preparing for earthquakes.
C. The significance of developing the AI prediction.
D. The limitation of AI algorithms in earthquake prediction.

14. What will the follow-up research focus on?
A. Conducting tests in different locations.
B. Applying the AI approach to other fields.
C. Building more earthquake stations in Texas.
D. Enlarging the database to train the calculation accuracy.

15. Which words can best describe the earthquake-predicting technology?
A. Stable but outdated.
B. Effective but costly.
C. Potential and economical.
D. Pioneering and promising.

Money habits are the small daily decisions we make that influence how we spend and save our money. It might be challenging to change these habits, but with the following tips, we can make improvements to achieve our financial objectives.

Taking a careful look at your existing spending habits is the first step. 16 It can make you realize the potential issues in your spending habits and identify areas where you can adjust your spending patterns.

17 Having a specific objective in mind will help you keep motivated and focused, whether it's saving for a down payment (首付) on a home or creating an emergency fund.

Setting and sticking to a budget (预算) is also crucial. A budget ensures that your spending is focused on your priorities and that you set aside enough cash to achieve your objectives. 18

Impulsive purchasing (冲动消费) is a bad habit to break. Kick the habit of impulse shopping, because it can significantly drain your bank account.
Make a list of the items you require before going shopping and stick to it. 19

Finally, pay attention to your financial habits and adjust them as needed. 20 Keep in mind that adjusting your spending patterns is a process, and its effects might not be seen right away. But in the long run, you can make positive changes and strengthen your overall financial health.

A. For many of us, spending comes naturally.
B. Next, set up clear financial objectives for yourself.
C. If you are on a budget, you do not have much money to spend.
D. Track your spending and goals, and be prepared to adjust as required.
E. This involves recording all your spending for a month and then analyzing the data.
F. Make sure to include a category for savings in your budget and use it consistently.
G. Additionally, wait a day or two before buying something so that you can decide if you still want it.

II. Cloze

The summer when I was ten, my mother decided to bring us into the world of art. My brother and I were not very 21 when we realized that we would have to spend one afternoon a week at the museum. That was not so bad — what really 22 me was that before each visit, she made us 23 artists and painting styles. That felt almost as 24 as being in school. 25 , who wants to spend summer thinking about 26 when you could be playing at the swimming pool?

First, we had to read about the ancient Egyptians and their 27 way of painting, and then look at them at the museum. An unusual style of art always 28 my brother but not me. Later we had to look at pictures of fat babies with wings and curly hair and with no clothes on, 29 around the edges of the paintings. 30 those pictures, I still couldn't see what was so great about art. However, on our last visit to the museum, things 31 — I saw a painting in which a woman was reading to a child. The colors were soft and 32 , and I could tell by the mother's 33 how happy she was just to be with the child. I could 34 take my eyes away from the painting.
It was really 35 looking at so many paintings to find one that could interest me so much.

21. A. surprised B. confused C. tired D. excited
22. A. troubled B. amazed C. satisfied D. relieved
23. A. give away B. read about C. see to D. depend on
24. A. sweet B. bad C. funny D. nervous
25. A. Therefore B. Meanwhile C. Anyway D. Instead
26. A. teachers B. parents C. artists D. coaches
27. A. smart B. strange C. careful D. familiar
28. A. attracted B. comforted C. failed D. survived
29. A. flying B. wandering C. swimming D. running
30. A. Struggling with B. Reflecting on C. Cutting through D. Staring at
31. A. appeared B. changed C. mattered D. followed
32. A. gentle B. bright C. dull D. faint
33. A. behavior B. appearance C. language D. expression
34. A. hesitantly B. slightly C. hardly D. merely
35. A. effortless B. useless C. worth D. painful

III. Grammar Gap Fill
Read the following passage and fill each blank with one appropriate word or the correct form of the word given in brackets.
Motion: Understanding the Basics and Applications

Introduction
Motion is a fundamental concept in various fields, including physics, engineering, and computer science. At its essence, motion refers to the movement of an object or a system of objects in relation to a frame of reference. This document aims to provide an overview of the basic principles of motion and to explore its applications in different domains.

I. Newtonian Mechanics and Motion
Motion, as understood in classical physics, is primarily governed by the laws of motion formulated by Sir Isaac Newton. These laws describe how the motion of an object changes in response to the forces acting upon it. The three laws of motion can be summarized as follows:
1. Law of Inertia: An object at rest tends to stay at rest, and an object in motion tends to stay in motion with the same speed and direction, unless acted upon by an external force.
2. Law of Acceleration: The acceleration of an object is directly proportional to the net force exerted on it and inversely proportional to its mass.
3. Law of Action and Reaction: For every action, there is an equal and opposite reaction.
Understanding these laws is crucial for predicting and analyzing the motion of objects in a wide range of scenarios, from everyday movements to celestial mechanics.

II. Types of Motion
Motion can be classified into various types based on different criteria. Some common types of motion include:
1. Linear Motion: This type of motion refers to the movement of an object along a straight line. Examples include the motion of a car on a straight road or a ball rolling down a slope.
2. Circular Motion: Circular motion involves the movement of an object along a circular path. A simple example is the motion of a satellite orbiting the Earth.
3. Oscillatory Motion: Oscillatory motion is characterized by repeated back-and-forth or to-and-fro movement about a fixed point or equilibrium position.
Simple pendulums and springs undergoing compression and expansion are examples of oscillatory motion.
4. Rotational Motion: Rotational motion occurs when an object spins or rotates about a fixed axis. A spinning top or a rotating wheel is an example of rotational motion.
5. Projectile Motion: Projectile motion describes the motion of an object launched into the air at an angle, following a curved trajectory. A familiar example is the motion of a thrown ball.

III. Applications of Motion
Understanding motion has significant practical applications across multiple fields. Some notable applications include:
1. Robotics: Motion planning and control are crucial in the field of robotics. Robots need to navigate their surroundings precisely and perform tasks, such as assembly or movement, based on predefined algorithms.
2. Animation and Gaming: The realistic animation of characters and objects in movies, video games, and virtual-reality environments relies heavily on understanding and replicating natural motion. Accurate physics-based simulations are key to providing immersive virtual experiences.
3. Transportation: The study of motion plays a vital role in the design, analysis, and optimization of transportation systems. Understanding the principles of motion helps engineers develop efficient vehicles and transportation networks, leading to improved safety and reduced energy consumption.
4. Sports Science: Analyzing and enhancing human motion in sports is essential for improving performance, preventing injuries, and optimizing training programs. Motion-capture technology allows coaches and athletes to analyze biomechanics and technique, leading to better results on the field.

Conclusion
Motion is a fundamental concept across various disciplines and has a wide range of applications. Understanding the principles of motion, as described by Newtonian mechanics, enables us to predict and manipulate the movements of objects and systems.
From robotics to animation, and from transportation to sports science, motion plays a vital role in advancing technology and improving our understanding of the world around us.
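The projectile motion described in Section II follows directly from Newton's second law with constant gravity. A minimal numerical sketch (the function and parameter names are illustrative; air resistance is ignored):

```python
import math

def projectile(v0, angle_deg, g=9.81):
    """Closed-form projectile kinematics derived from Newton's second law
    with constant gravity and no air resistance."""
    theta = math.radians(angle_deg)
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    t_flight = 2.0 * vy / g        # time to return to launch height
    x_range = vx * t_flight        # horizontal range
    h_max = vy * vy / (2.0 * g)    # peak height
    return t_flight, x_range, h_max

# A ball thrown at 20 m/s and 45 degrees (the angle that maximizes range).
t, r, h = projectile(20.0, 45.0)
print(round(t, 2), round(r, 1), round(h, 1))  # 2.88 40.8 10.2
```

The 45-degree optimum is visible in the range formula, which reduces to v0²·sin(2θ)/g.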
A navigation graph for real-time crowd animation on multilayered and uneven terrain

Julien Pettré¹, Jean-Paul Laumond² and Daniel Thalmann¹
¹ EPFL-VRlab, CH-1015 Lausanne, Switzerland
Telephone: +41 21 693 52 17, Fax: +41 21 693 53 28
Email: julien.pettre@epfl.ch, daniel.thalmann@epfl.ch
² RIA-LAAS/CNRS, 7, avenue du Colonel Roche, 31077 Toulouse cedex 4, France
Telephone: +33 5 61 33 63 47, Fax: +33 5 61 33 64 55
Email: jpl@laas.fr

Abstract
This paper addresses real-time navigation for crowds moving on multilayered and uneven terrains. The objective is to animate thousands of characters in real time in large-scale environments without attending to motion details: a common goal is given to a homogeneous crowd, and the proposed method automatically breaks the homogeneity of the crowd by spreading characters over various routes reaching the goal; the animation is displayed from a viewpoint that does not require handling character interactions locally. Our approach is based on a space-structuring technique that automatically decomposes multilayered or uneven terrains into navigation corridors, giving rise to a navigation graph that supports the search.

Keywords
real-time crowds, path planning, navigation, navigation graph, uneven, multi-floored, multilayered terrain

1 Introduction
The recent increase in computing power has made it possible to populate interactive virtual worlds with numerous inhabitants. Crowds are now common in movies and, increasingly, in video games, especially real-time strategy games.
Simulation of real-time virtual crowds is still a difficult challenge, given that the available computation time is mainly dedicated to rendering; a need for fast simulation techniques exists. In this paper, we focus on the navigation problem. Recent approaches [SKG05, LK05] considered the navigation problem as a pure motion-planning one, based respectively on Probabilistic Roadmaps [KSLO96] and on an A*-search planner. In these approaches, a locomotion trajectory is calculated successively for each character, which is then considered as a moving obstacle for the remaining planning operations (inter-pedestrian collision avoidance is thus guaranteed). These solutions are efficient and produce high-quality motions. However, they are limited by the possible size of the crowd. Indeed, the planning loop is invoked for each pedestrian, and the search space becomes more and more constrained as pedestrians progressively become moving obstacles (once their own motion is planned). Our approach differs: in our view, inter-pedestrian collision is a local problem that should be solved only in the user's focus area; elsewhere, people should be scattered in the environment to obtain believable crowd motion. How can a variety of paths with similar initial and final conditions be obtained?

Previous approaches relied mainly on interactions between crowd characters to produce behavioral variety. In [BG96], particle systems are used to simulate crowds. People are controlled by high-level directives, while their local behaviors (obstacle and people avoidance) result from a sum of attractive or repulsive forces analogous to physical electric ones. In [HFV00], Helbing also introduces a solution based on physical laws. Reynolds, in [Rey87], demonstrated that a simple perception model of the surrounding world and a small set of local rules (match velocity, avoid collisions, move towards the center of the flock) were sufficient to simulate the aggregate motion of a flock of boids in a believable way. Hodgins et al. also use a perception model as
inputs to the behavioral model presented in [BH97]. More elaborate behavioral models allow the simulation of crowds of passengers in railway stations, as described in [ST05], and, more generally, of pedestrians in virtual cities (note that semantic environments are required). However, complex behavioral models are time-consuming, and the possible number of people composing crowds is limited. In [UT02], Ulicny proposes a scalable behavioral model to alleviate this limitation. From the geometric description of the environment, Lamarche [LD04] proposes an automatic extraction of the topology, which is stored in a graph answering navigation queries. The method is limited to flat terrains. Our objectives are close to those of this last approach. From the geometric definition of any environment, we automatically capture its topology in a graph structure. The navigation graph solves user queries interactively, such as: selecting characters and assigning new goals, creating people flows between pairs of locations, or creating sets of attractive areas between which people navigate. Our contribution is to address at the same time:
- large-scale crowd navigation involving thousands of characters,
- unstructured environments, including multilayered and uneven terrains,
- real-time computation for interactivity.
The method is based on an automatic space structuring introduced in Section 2, giving rise to a navigation graph. The navigation graph is then used for path planning: the originality of the approach is to compute a set of non-homotopic itineraries that supports character spreading (Section 3). Results and performance of the method are presented in Section 4, before the Discussion and Conclusion.

2 A space-structuring-based navigation graph
The general idea of our approach may be introduced from the case of flat-terrain navigation. In such a case the terrain is structured into obstacles to be avoided and free space. We first compute the Voronoï diagram of the free space. Then we build a set of collision-free convex cells
along the Voronoï diagram. The adjacency graph of the cells defines the so-called navigation graph. The idea is not new; various algorithms exist for this simple case [Lat91]. Here the challenge is to extend the idea to multilayered and uneven terrains. In such cases, terrains are no longer obviously decomposed into obstacles and free space. Some parts of the terrain should be classified as obstacles or free space according to the slope angle. We should therefore use dedicated techniques for space structuring. This problem has already been addressed in the framework of motion planning for robots moving on rough terrains [HST99, Che99]. The space-structuring method we use is inspired by such works. It consists in first computing an elevation map from the geometric data structures modelling the terrain. The elevation map is a 3D grid whose points belong to the terrain. Two adjacent points within the grid are connected if the slope of the associated segment (with respect to a horizontal plane) is less than some user-defined threshold. This connecting rule gives rise to a 3D navigation grid (Section 2.1). Then a distance grid is computed, as well as the associated medial axis (Section 2.2). Finally, navigation corridors are computed along the medial axis and structured into a navigation graph (Section 2.3).

2.1 Elevation map and navigation grid
The elevation map is a 3D grid whose points refer to positions on the considered terrain where virtual humans are able to stand (i.e., having enough free space above). The map is computed from the environment mesh, the maximum character height Hc, and a user-defined precision (corresponding to the grid unit). The key idea is to use views of the environment from above and the OpenGL Z-buffer to retrieve all the possible elevations of the terrain for each horizontal position. Then, we filter the elevation map to erase points where humans are not able to stand.
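The slope-based connection rule above can be sketched as follows. This is a simplified single-layer sketch: the terrain values, cell size, and threshold are illustrative assumptions, and the vertical-obstacle check described later in Section 2.1 is omitted.

```python
import math

def navigation_grid(elev, cell, max_slope_deg):
    """Connect 4-adjacent cells of an elevation map when the slope of the
    segment joining them (rise over horizontal run) stays under a
    user-defined threshold, as in the paper's connection rule."""
    rows, cols = len(elev), len(elev[0])
    max_slope = math.tan(math.radians(max_slope_deg))
    edges = {}
    for i in range(rows):
        for j in range(cols):
            nbrs = []
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    slope = abs(elev[ni][nj] - elev[i][j]) / cell
                    if slope <= max_slope:
                        nbrs.append((ni, nj))
            edges[(i, j)] = nbrs
    return edges

# A 3x3 terrain with a steep step on the right column.
elev = [[0.0, 0.1, 2.0],
        [0.0, 0.1, 2.0],
        [0.0, 0.1, 2.0]]
grid = navigation_grid(elev, cell=1.0, max_slope_deg=30.0)
print(len(grid[(1, 1)]))  # 3: the steep neighbour (1, 2) is excluded
```

Points with fewer than four connections, like those next to the steep step, are exactly the border points the paper uses to delimit non-navigable areas.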
Finally, a navigation grid is deduced by checking connections between adjacent elevation-map points. The grid precision is defined interactively, without need for expertise: to do so, users resize the viewport of an above view of the environment. The ideal viewport size is the smallest one allowing all the environment details, such as narrow passages or small obstacles, to be displayed. The resulting viewport size is the one used for the next operations. OpenGL is initialized in orthographic projection mode, the environment being seen from above. Two horizontal clipping planes are also initialized, oppositely oriented and separated by a distance Hc. These planes allow the environment to be cut into slices of height Hc, which are displayed successively by setting the plane altitudes correctly. After each slice is displayed, the drawn pixels' coordinates (screen coordinates and depth) are transformed into world coordinates and stored as new elevation-map points. The resulting elevation map identifies all the possible altitudes of the terrain for any horizontal position, with some exceptions. Indeed, some areas may be occluded in the slice displays. As the slice height is Hc, occlusion implies that the free space above occluded areas is insufficient for characters to stand: occluded pixels are thus safely ignored. However, at the end of the process we still have to filter superposed elevation-map points separated by too small a vertical distance (checking is required between points belonging to different slices). We now check for connections between elevation-map points. The elevation-map points and their connectivity compose our navigation grid. Two elevation-map points are connected if:
- they are adjacent (adjacency is limited to the four neighbors along the two main horizontal axes),
- the declivity between the points is below a user-defined threshold,
- no vertical obstacle stands between the points.
Checking for vertical obstacles is required because the elevation map has been computed from above views of the
environment, in which vertical faces of objects are invisible. To do so, we again use orthographic views of vertical slices of the environment (whose depth is the grid unit), seen from the sides (e.g., left and front views). If the side views do not reveal the presence of an obstacle and if the declivity criterion is respected, adjacent grid points are connected. As a result, each point may have a maximum of 4 connections (one for each of its 4 neighbors). Points have fewer than 4 connections when close to obstacles or high declivities. Such points compose the border of the navigation grid and implicitly model the non-navigable areas. From this observation, we automatically deduce the distance grid (Section below) and, next, the navigation graph.

2.2 Distance to border points and medial axis
Our next goal is to evaluate the clearance around each point of the previously computed navigation grid. To do so, we associate a distance grid with the navigation grid. The distance grid provides the distance between any navigation-grid point and the nearest border grid point. The considered point and the related border point should belong to the same layer. Take the example of a road passing under a bridge: when a point belonging to the road is considered, its clearance is the distance to the road border, and the bridge should not be taken into account (and conversely). To compute the distance grid, we successively start from each border point and propagate to connected points.
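On a single layer, this propagation with early stopping computes the same result as a multi-source breadth-first search seeded with all border points at once; a minimal sketch of that equivalent formulation (hop-count distances on an abstract connectivity map, ignoring the layer and turn-back constraints):

```python
from collections import deque

def distance_grid(connect, border):
    """Multi-source BFS: for every grid point, the hop distance to the
    nearest border point. `connect` maps a point to its connected
    neighbours; `border` lists the border points (distance 0)."""
    dist = {p: 0 for p in border}
    queue = deque(border)
    while queue:
        p = queue.popleft()
        for q in connect[p]:
            if q not in dist:  # first visit is the shortest
                dist[q] = dist[p] + 1
                queue.append(q)
    return dist

# 1D corridor: borders at both ends, clearance peaks in the middle.
pts = list(range(7))
connect = {p: [q for q in (p - 1, p + 1) if 0 <= q < 7] for p in pts}
d = distance_grid(connect, [0, 6])
print([d[p] for p in pts])  # [0, 1, 2, 3, 2, 1, 0]
```

The local maxima of this distance field (point 3 in the corridor) are exactly the medial-axis points used to place graph vertices.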
Turn-backs are forbidden, so that the propagation does not move to superposed layers. The distance between the visited points and the current origin border point is calculated. Visited points store the minimal distance calculated (by comparison with successive propagations from other border points). Note that, to increase the efficiency of the process, the propagation is stopped where the visited point already has a smaller minimal distance to another border point. Finally, we identify the distance-grid points belonging to the medial axis. The medial axis is interesting in our case because it corresponds to itineraries providing maximum clearance to borders (and implicitly to obstacles and non-admissible slopes). The medial axis can be compared to Voronoï cell borders, although it is slightly different in our case due to the multilayered structure of the surfaces.

2.3 Navigation graph
From the previous distance grid, we now deduce our navigation graph. The computed medial axis and clearance allow us to deduce a set of connected areas that are obstacle-free and flat enough for navigation. From each navigation-grid point and the associated clearance, we can guarantee navigability within a cylindrical area whose height is Hc, whose radius equals the clearance, and whose caps follow the terrain. Such modeled areas are our navigation-graph vertices (Figure 1, left image). Now let us consider pairs of overlapping cylindrical areas. Any point belonging to the first area can be joined to any other point belonging to the second area by following a path passing through any point lying in their intersection.
Consequently, navigation-graph edges connect pairs of vertices that correspond to overlapping cylindrical areas (Figure 1, right image). Geometrically, edges are represented as vertical rectangular gates. To deduce the gate coordinates, we compute the intersection points of 2 circles (the vertical projections of the cylinders on a horizontal plane). The edge cost is the distance between their centers.

Figure 1: Representation of the navigation-graph vertices and edges

As mentioned in the previous Section, points belonging to the medial axis are the most interesting ones from which to deduce graph vertices. Indeed, they all have locally maximum clearance to borders. However, considering the totality of these points results in dense and redundant navigation graphs (lowering the efficiency of solving navigation queries).

Figure 2: Vertex placement results from a tradeoff between graph complexity and coverage quality. From left to right: lowest complexity, best coverage, and chosen tradeoff

As illustrated in the bottom images of Figure 2, the subset of points from which graph vertices are deduced results from a tradeoff between graph complexity and coverage quality (the chosen tradeoff is illustrated in the right image). We use the following algorithm to automatically extract a subset of grid points from which the navigation-graph vertices are built:
1. select the distance-grid point having maximum clearance (this point implicitly belongs to the medial axis),
2. create a vertex from the selected point,
3. disable the points enclosed by the resulting cylinder,
4. choose the next available medial-axis point having the maximum clearance, and loop to step 2.
Using the property stated above in this section, people can navigate from gate to gate with respect to the graph structure, i.e., by following a contiguous set of edges. Thus, gates delimit navigable corridors in the environment. The next section describes how navigation graphs, gates, and corridors are used to answer user queries with a variety of solutions.

3 Planning for crowds based on navigation
graphs
Path planning based on navigation graphs reduces to a graph-search problem. The solution for navigating from one area to another is captured by the graph (if it exists) and can be found using any graph-search algorithm. The resulting solution path, which is a succession of edges to follow, can be transformed into an itinerary by selecting way-points in the resulting corridors. However, our goal is to bring solution variety to the path-planning problem, so that people sharing the same goal get spread over individual itineraries. Solution variety is obtained at two distinct levels: first, a specific graph-search method provides several solution paths belonging to different homotopy classes (Section 3.1). Then, each of these paths can be continuously declined into different sets of way-points (Section 3.2), corresponding to different itineraries belonging to the same homotopy class. Roughly speaking, in our case, two itineraries belong to the same homotopy class if they can be continuously transformed into one another without crossing obstacles or high declivities (the transformation space is limited to navigable surfaces).

Figure 3: The variety of solution paths is obtained in 2 steps: first, non-homotopic paths are found; second, way-points may be selected anywhere in the resulting corridors

For example, in a virtual city, itineraries in identical streets belong to the same homotopy class; conversely, if a building separates them, they belong to different homotopy classes. The next two Sections describe, first, how non-homotopic paths are found and, second, how each is declined into homotopic itineraries.

3.1 Dijkstra-based path search
Given an initial and a final area to join, our objective is to retrieve several solution paths from the navigation graph.
To do so, we use an iterative Dijkstra-based path-search technique. The principle is the following: first, the shortest path is found. Second, some of the edges composing the solution path are (temporarily) removed from the graph. Finally, the process is reiterated, and it stops when the initial and final vertices become disconnected. Parameters control the edge-removal stage and allow the process to be driven. We experimentally observed that:
- playing with the percentage of edges to remove controls the variation between two successive paths found (an intuitive parameter),
- playing with the percentage of edges to keep at the path extremities avoids quick disconnections and influences the total number of paths found,
- favoring the removal of edges linking vertices having few connections also preserves connectivity,
- favoring the removal of edges linking vertices having numerous connections increases path spread, but reduces the number of paths found.
As mentioned before, each path among the several found is a succession of edges, i.e., a succession of rectangular gates to cross, which delimit corridors as illustrated in Figure 3. As obstacles separate each set of corridors, the paths belong to different homotopy classes. Now, the corridor width is exploited to compute individualized way-point sets for each pedestrian.

3.2 Individualized way-points
For each character, one of the previously computed paths is selected and transformed into a way-point sequence.
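The iterative search of Section 3.1 can be sketched as follows. This is a minimal sketch under simplifying assumptions: the graph is an abstract adjacency dictionary, and the tunable removal percentages are reduced to "remove all interior edges of the last path, keeping one edge at each extremity".

```python
import heapq

def shortest_path(adj, src, dst):
    """Plain Dijkstra returning the vertex sequence of the shortest path,
    or None when src and dst are disconnected."""
    dist, prev, seen = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:  # reconstruct the path by walking predecessors
            path = [u]
            while u != src:
                u = prev[u]
                path.append(u)
            return path[::-1]
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return None

def varied_paths(adj, src, dst, keep_ends=1):
    """Find a path, remove its interior edges, and repeat until
    disconnection, yielding paths in distinct homotopy classes."""
    paths = []
    while True:
        p = shortest_path(adj, src, dst)
        if p is None:
            return paths
        paths.append(p)
        removed = False
        for u, v in zip(p[keep_ends:-1 - keep_ends], p[keep_ends + 1:-keep_ends]):
            if v in adj[u]:
                del adj[u][v]
                del adj[v][u]
                removed = True
        if not removed:  # path too short to prune further
            return paths

# Two routes from "s" to "d": a short one via a-b and a longer one via c.
adj = {
    "s": {"a": 1.0, "c": 2.0},
    "a": {"s": 1.0, "b": 1.0},
    "b": {"a": 1.0, "d": 1.0},
    "c": {"s": 2.0, "d": 2.0},
    "d": {"b": 1.0, "c": 2.0},
}
routes = varied_paths(adj, "s", "d")
print(routes)  # [['s', 'a', 'b', 'd'], ['s', 'c', 'd']]
```

Removing only interior edges while keeping those at the extremities mirrors the paper's observation that preserving edges near the path ends avoids quick disconnection of source and goal.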
As many way-points as the number of gates to cross are generated. An individual parameter p ranging from 0 to 1 determines whether a character crosses gates on the left (p ∈ [0, 0.33]), in the middle (p ∈ [0.33, 0.66]), or on the right (p ∈ [0.66, 1]). In doing so, we avoid crossings between homotopic itineraries. Consequently, we get a continuous declination of paths into individualized way-point sequences. For a given path, this declination generates itineraries belonging to a unique homotopy class (because no obstacle separates them). An example of such a declination is illustrated in the right image of Figure 3. The steering method allowing successive way-points to be joined is not a main issue here; however, let us mention that we use a set of different methods to ensure scalability. Depending on the distance between the virtual character and the user's point of view, we synthesize linear paths or smooth paths [Bou05]. We do not synthesize paths for invisible characters, but maintain a progression parameter to ensure crowd coherency. Each character parameterizes the steering methods with an individual set of values (e.g., maximum locomotion velocity). The next Section illustrates some navigation examples.

4 Results
Figure 4: Considered environment (top-left), resulting navigation and distance grids (top-right), and deduced navigation graph (bottom)

We tested our method on the environment introduced in Figure 4, top-left image. The environment size is 50 × 50 units, and it is composed of an uneven surface, 7 trees, wooden barriers, 2 bent bridges over a small canyon, and a planar footbridge linked to the ground by a spiral stair and an L-stair. The top-right image of Figure 4 displays the computed navigation grid. We used a 0.2-unit precision and a 2-unit maximum character height (Hc) in this example. The navigation grid identifies the overly sloping zones: in the canyon or on the sides of the hill, one can observe that grid points have missing connections to their neighbors. Also, under bridges or stairs, where pedestrians cannot stand, no point has been inserted into the
grid. The distance to the nearest border point is color-coded: blue is for small distances, whereas yellow points are farther from obstacles and slopes. The two bottom images of Figure 4 represent the vertices (left) and edges (right) of the deduced navigation graph. Vertices are represented as cylinders, whereas edges are represented as vertical gates, as explained in Section 2.3. The graph is made of 474 vertices and 582 edges, distributed over several connected components. The largest component correctly covers the area where pedestrians are expected to walk. The navigation graph is computed from the environment in a minute on a common Pentium IV 1.5 GHz desktop station with 1 GB of RAM.

Figure 5 illustrates two examples of navigation plans. In the first example, a crowd is asked to go from the left side of the environment to the planar footbridge. In the second example, the crowd moves down from the footbridge and then up to the top of the hill. The top-left image of Figure 5 illustrates the solution paths to the first problem: different successions of gates and corridors leading to the goal are displayed. Several solutions are available: first, pedestrians may choose between the two possible bridges to move towards the center of the environment. Then, pedestrians may use the L-stair as well as the spiral stair to get onto the footbridge. The multilayered structure of the spiral stairs (parts of the solution path are superposed) is perfectly handled.

Figure 5: Navigation results

Pedestrians are allowed to walk anywhere inside the corridors. In the top-right image, we can see a 100-pedestrian crowd following the navigation plan. Each pedestrian individually chooses one of the available paths and transforms it into a way-point sequence. As a result, the crowd gets spread both over paths and inside corridors. The second navigation plan (Figure 5, bottom-left) illustrates the solution's ability to handle uneven terrains. The declivity of the right side of the hill is too high, and people have to skirt around it to get to the top. To get there, pedestrians
may choose to go down from the footbridge by one of the two available stairs, and to pass on the left or right side of the barrier near the hill. In these examples, the path-search computation time is 3 ms per navigation query, and the transformation of paths into way-point sequences consumes 0.36 ms per character. The path-search complexity is O(PE log V), where P is the number of non-homotopic paths found, E the number of edges, and V the number of vertices. Transformation of paths into way-points is linear in the number of characters in the crowd and the number of edges composing solution paths. The memory usage is low (the graph occupies 1 MB in our case, whereas the whole data structure, i.e., crowd agents, environment geometry, paths, way-points, etc., occupies 10 MB).

Figure 6: Navigation in a virtual city

Technically, we developed a stand-alone program for computing navigation graphs, whereas the navigation functions are gathered in a C library. As a result, our navigation solution can be invoked by other software platforms. Figure 6 illustrates our solution operating in VHD++, a virtual reality software platform [PPM+03]. A crowd is moving in a town where several attractive places are defined: a railway station, a hotel, a circus, etc.
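The path search above amounts to repeated shortest-path queries over the navigation graph's vertices and gates; the O(PE log V) bound follows from running an O(E log V) Dijkstra search once per non-homotopic path. As a rough illustration only (this is not the paper's C library, and all names here are hypothetical), a single query and its way-point conversion might look like:

```python
import heapq

def shortest_path(adj, start, goal):
    """Single-source shortest path (Dijkstra) over a navigation graph.
    adj maps a vertex id to a list of (neighbor, gate_length) pairs."""
    dist, prev = {start: 0.0}, {}
    pq, done = [(0.0, start)], set()
    while pq:
        d, v = heapq.heappop(pq)
        if v in done:
            continue
        done.add(v)
        if v == goal:
            break
        for w, length in adj.get(v, []):
            nd = d + length
            if nd < dist.get(w, float("inf")):
                dist[w], prev[w] = nd, v
                heapq.heappush(pq, (nd, w))
    if goal not in done:
        return None          # goal unreachable from start
    path = [goal]            # walk predecessors back to the start
    while path[-1] != start:
        path.append(prev[path[-1]])
    path.reverse()
    return path

def to_waypoints(path, gate_midpoint):
    """Turn a vertex sequence into way-points, one per traversed gate."""
    return [gate_midpoint[(a, b)] for a, b in zip(path, path[1:])]
```

Each pedestrian could then pick randomly among the alternative paths found by repeated searches, which is one simple way to obtain the spread over paths and corridors described above.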
Pedestrians move randomly between attractive areas.

5 Discussion

Compared to previous approaches, the main originality of our solution is the absence of interactions between pedestrians to drive their navigation. Collision detection and avoidance (between pedestrians) was probably the main bottleneck in crowd simulation. However, this stage was necessary to obtain behavioral variety and to get believable motions, as mentioned in the Introduction. We started from the observation that solving collisions is required by some applications, such as safety-related ones, but does not matter for many others (such as entertainment). Our solution provides eye-believable motions without interaction, and we observed that collisions occurring in our simulations are undetectable when far from the point of view. We agree that collisions should be solved locally, in the spectator's focus area, for a reduced number of pedestrians. Elsewhere, in the background, the aspect of the crowd motion is what the spectator expects to see. Populating virtual places with large crowds now becomes feasible using our solution and, above all, crowds are interactive and controllable.

The navigation graph structures the environment but also the crowd. Indeed, each pedestrian knows in which graph vertex it is currently moving, and conversely each vertex references which pedestrians are currently navigating in the corresponding area. These relationships between environment and crowd provided by the graph are as yet unexploited in our simulations, but work is in progress. For example, Figure 5 displays only a 100-pedestrian crowd moving along the planned paths, so that the spread of people remains visible in the figures. Results are still believable with a 1,000-pedestrian crowd, and the solution remains interactive for tens of thousands, but pedestrians continue to move ignoring each other in areas that become overcrowded. To mitigate this limitation, the instantaneous local population density can be computed efficiently for each area. Local population density then
influences pedestrians' velocity and distribution on paths. First results are promising.

Finally, future work directions concern simulation scalability. As mentioned in Section 3.2, the steering method for reaching way-points is already scalable. Each stage of our solution should also be scalable. Depending on the visibility of the considered pedestrians, navigation queries should be solved with a different accuracy. This should allow us to address larger and more complex environments, as well as larger crowds.

6 Conclusion

We presented a versatile, robust, and efficient solution for real-time crowd navigation in environments combining uneven and multilayered surfaces. Our solution provides a variety of solutions to the path-planning problem, so that each character can reach goals in a unique way, and crowds spread over the whole environment while moving. The solution fits many applications, especially in the entertainment industry, where the absence of collision checking is not necessarily an issue.

First results are promising, and we aim at using the developed method as a basis for a more elaborate crowd simulation architecture. Our future directions are first to equip the method with an inter-collision avoidance technique, and then to exploit the available implicit information to implement smart navigation strategies, to simulate the formation and aggregation of people flows, and to influence individual behaviors.

References

[BG96] E. Bouvier and P. Guilloteau. Crowd simulation in immersive space management. In Proc. of the Eurographics Workshop on Virtual Environments and Scientific Visualization '96, pages 104-110, 1996.
[BH97] D. Brogan and J. Hodgins. Group behaviors for systems with significant dynamics. Autonomous Robots, volume 4, pages 137-153, 1997.
[Bou05] R. Boulic. Proactive steering toward oriented targets. Eurographics Short Papers, 2005.
[Che99] M. Cherif. Motion planning for all-terrain vehicles: a physical modeling approach for coping with dynamic and contact interaction constraints. IEEE Transactions on Robotics and
Automation (ICRA), 1999.
[HFV00] D. Helbing, I. Farkas, and T. Vicsek. Simulating dynamical features of escape panic. Nature, 407:487-490, 2000.
[HST99] A. Hait, T. Siméon, and M. Taïx. Algorithms for rough terrain trajectory planning. Advanced Robotics, 14(6), 1999.
[KSLO96] L. E. Kavraki, P. Svestka, J.-C. Latombe, and M. H. Overmars. Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Tr. on Robotics and Automation, 12(4):566-580, 1996.
[Lat91] J.-C. Latombe. Robot Motion Planning. Kluwer Academic Publishers, 1991.
[LD04] F. Lamarche and S. Donikian. Crowds of virtual humans: a new approach for real time navigation in complex and structured environments. Computer Graphics Forum, 2004.
[LK05] M. Lau and J. Kuffner. Behavior planning for character animation. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA'05), 2005.
[PPM+03] M. Ponder, G. Papagiannakis, T. Molet, N. Magnenat-Thalmann, and D. Thalmann. VHD++ development framework: Towards an extendible, component-based VR/AR simulation engine featuring advanced virtual character technologies. Computer Graphics International (CGI), 2003.
[Rey87] C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ACM Computer Graphics, 21(4):25-34, July 1987.
[SKG05] M. Sung, L. Kovar, and M. Gleicher. Fast and accurate goal-directed motion synthesis for crowds. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA'05), 2005.
[ST05] W. Shao and D. Terzopoulos. Autonomous pedestrians. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA'05), 2005.
[UT02] B. Ulicny and D. Thalmann. Towards interactive real-time crowd behavior simulation. Computer Graphics Forum, 21(4):767-775, Dec 2002.
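The Discussion above proposes that each graph vertex reference the pedestrians currently inside its area, so that an instantaneous local population density can drive walking speed and path distribution. A minimal sketch of that bookkeeping (illustrative only; the density-to-speed mapping below is a hypothetical choice, not taken from the paper):

```python
import math

class Vertex:
    """A circular navigation-graph vertex that tracks its occupants."""
    def __init__(self, center, radius):
        self.center = center
        self.radius = radius
        self.pedestrians = set()   # back-references, as suggested in the Discussion

    def contains(self, p):
        return math.dist(p, self.center) <= self.radius

    def density(self):
        """Occupants per unit area of the vertex disc."""
        return len(self.pedestrians) / (math.pi * self.radius ** 2)

def update_occupancy(vertices, positions):
    """Re-register each pedestrian with the first vertex containing it."""
    for v in vertices:
        v.pedestrians.clear()
    for pid, p in positions.items():
        for v in vertices:
            if v.contains(p):
                v.pedestrians.add(pid)
                break

def speed_factor(density, comfort=0.3):
    """Hypothetical slowdown: full speed below a comfort density, 1/d falloff above."""
    return 1.0 if density <= comfort else comfort / density
```

Running `update_occupancy` once per frame keeps the density query O(1) per vertex, so overcrowded areas can slow their occupants or deflect new path choices cheaply.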
An English Essay on Animation Production

Title: Exploring the Art of Animation Production
Animation production is a fascinating blend of creativity, technology, and storytelling. It's a medium that has evolved significantly over the years, from traditional hand-drawn animation to computer-generated imagery (CGI) and beyond. In this essay, we'll delve into the various aspects of animation production, exploring its techniques, challenges, and significance in today's entertainment industry.

To begin with, let's discuss the different types of animation techniques commonly used in production. Traditional animation, also known as cel animation, involves hand-drawing each frame to create the illusion of movement. This method, while time-consuming, has a unique charm and has been the foundation of animation for decades. On the other hand, CGI animation utilizes computer software to generate lifelike images and animations. This technique has revolutionized the industry, allowing for more complex visuals and seamless integration of special effects.

Regardless of the technique used, the animation production process typically follows a similar workflow. It starts with pre-production, where the concept, storyboards, and character designs are developed. This phase is crucial as it lays the foundation for the entire project. Next comes production, where the actual animation is created based on the storyboard. This involves drawing or modeling the characters and backgrounds, as well as animating their movements. Finally, in post-production, sound effects, music, and voiceovers are added to bring the animation to life.

One of the biggest challenges in animation production is achieving realism and fluidity of movement. Whether it's a character walking, running, or expressing emotions, animators must pay close attention to details such as timing, spacing, and weight to make the animation believable.
This requires a combination of artistic skill and technical knowledge, as well as a keen understanding of anatomy and physics.

Another challenge is staying true to the vision of the director while also meeting the demands of the client or studio. Animation production is often a collaborative effort involving artists, animators, writers, and directors, each with their own creative input. Balancing artistic integrity with commercial interests can be tricky, but it's essential for the success of the project.

Despite the challenges, animation production is a rewarding endeavor with significant cultural and economic impact. Animated films and series have the power to entertain, educate, and inspire audiences of all ages. They transcend language and cultural barriers, making them a universal form of storytelling. Moreover, the animation industry generates billions of dollars in revenue each year, creating jobs and driving innovation in technology.

In conclusion, animation production is a multifaceted process that combines artistry, technology, and storytelling. From traditional hand-drawn animation to cutting-edge CGI, animators continue to push the boundaries of what's possible, creating immersive worlds and memorable characters. Despite its challenges, animation remains a vibrant and influential medium with a bright future ahead.
Eftychios D. Sifakis
Assistant Professor
Department of Computer Sciences, University of Wisconsin – Madison

Personal Information

Home Address: 1101 Redan Dr, Verona, WI 53593
Work Address: University of Wisconsin – Madison, 1210 West Dayton Street, Computer Sciences Building, Room 6355, Madison, WI 53706
Email: sifakis@
Phone: 608-263-0475 (office), 608-262-9777 (fax)
Citizenship: Dual citizen (USA, Greece)

Education

[Sept 96 – June 00] B.Sc. in Computer Science, University of Crete, Greece
[Sept 00 – June 02] B.Sc. in Mathematics, University of Crete, Greece
[Sept 02 – June 07] Ph.D. in Computer Science, Stanford University
Dissertation title: "Algorithmic aspects of the simulation and control of computer generated human anatomy models"
Advisor: Ronald Fedkiw

Professional Experience

[July 98 – June 00] Research intern, ICS-FORTH, Greece
[July 00 – Aug 02] Research associate, University of Crete, Greece
[Sept 00 – Aug 02] Military service, Greek Signal Corps [currently 2nd lieutenant in reserve]
[Mar 05 – July 07] Consultant – Intel Corporation (Santa Clara, CA)
[Sept 07 – Dec 10] Post-doctoral scholar, University of California Los Angeles
[Jan 08 – Aug 08] Consultant – SimQuest LLC (Boston, MA)
[Jan 08 – today] Consultant – Walt Disney Animation Studios (Burbank, CA)
[Jan 11 – today] Assistant Professor, University of Wisconsin – Madison

Teaching (as primary instructor)

[Winter 2010] Math 151B: Applied Numerical Methods, Department of Mathematics, University of California Los Angeles
[Spring 2011] CS412: Introduction to Numerical Methods, Department of Computer Sciences, University of Wisconsin – Madison
[Fall 2011] CS838: Topics in Computer Science: Advanced Modeling and Simulation, Department of Computer Sciences, University of Wisconsin – Madison
[Spring 2012] CS412: Introduction to Numerical Methods, Department of Computer Sciences, University of Wisconsin – Madison
[Fall 2011] CS838 (Topics in CS): Introduction to Physics-based Modeling & Simulation, Department of Computer Sciences, University of
Wisconsin – Madison

Publications

T. Patterson, N. Mitchell and E. Sifakis, "Simulation of complex nonlinear elastic bodies using lattice deformers", ACM SIGGRAPH Asia (to appear), 2012.
Y. Zhu, Y. Wang, J. Hellrung, A. Cantarero, E. Sifakis, J. Teran, "A Second-Order Virtual Node Algorithm for Nearly Incompressible Linear Elasticity in Irregular Domains", Journal of Computational Physics (to appear), 2012.
A. McAdams, Y. Zhu, A. Selle, M. Empey, R. Tamstorf, J. Teran and E. Sifakis, "Efficient elasticity for character skinning with contact and collisions", ACM Transactions on Graphics (SIGGRAPH Proceedings) 30(4), pp. 1-12, 2011.
J. Hellrung, L. Wang, E. Sifakis and J. Teran, "A Second-Order Virtual Node Method for Elliptic Problems with Interfaces and Irregular Domains in Three Dimensions", Journal of Computational Physics, In Press (DOI: 10.1016/j.jcp.2011.11.023), 2011.
C. L. Richardson, J. Hegeman, E. Sifakis, J. Hellrung and J. Teran, "An XFEM method for modelling geometrically elaborate crack propagation in brittle materials", International Journal for Numerical Methods in Engineering, 88(10), pp. 1042-1065 (DOI: 10.1002/nme.3211), 2011.
A. McAdams, E. Sifakis and J. Teran, "A parallel multigrid Poisson solver for fluids simulation on large grids", ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), edited by M. Otaduy and Z. Popovic, 2010.
J. Bedrossian, J. H. von Brecht, S. Zhu, E. Sifakis and J. Teran, "A Second Order Virtual Node Method for Elliptic Problems with Interfaces and Irregular Domains", Journal of Computational Physics, 229, pp. 6405-6426, 2010.
Y. Zhu, E. Sifakis, J. Teran and A. Brandt, "An efficient multigrid method for the simulation of high-resolution elastic solids", ACM Transactions on Graphics 29(2), 2010 (presented at SIGGRAPH 2010).
S.-H. Lee, E. Sifakis and D. Terzopoulos, "Comprehensive Biomechanical Modeling and Simulation of the Upper Body", ACM Transactions on Graphics 28(4), 2009 (presented at SIGGRAPH 2010).
A. McAdams, A. Selle, K.
Ward, E. Sifakis and J. Teran, "Detail preserving continuum simulation of straight hair", ACM Transactions on Graphics (SIGGRAPH Proceedings), 28(3), 2009.
J. Hellrung, A. Selle, A. Shek, E. Sifakis and J. Teran, "Geometric fracture modeling in BOLT", ACM SIGGRAPH Talks (Sketch), 2009.
E. Sifakis, J. Hellrung, J. Teran, A. Oliker and C. Cutting, "Local Flaps: A Real-Time Finite Element Based Solution to the Plastic Surgery Defect Puzzle", Medicine Meets Virtual Reality 17, 2009.
E. Sifakis, S. Marino and J. Teran, "Globally coupled impulse-based collision handling for cloth simulation", ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), edited by M. Gross and D. James, 2008.
E. Sifakis, K. Der and R. Fedkiw, "Arbitrary cutting of deformable tetrahedralized objects", ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), edited by D. Metaxas and J. Popovic, 2007.
E. Sifakis, T. Shinar, G. Irving and R. Fedkiw, "Hybrid simulation of deformable solids", ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), edited by D. Metaxas and J. Popovic, 2007.
C. J. Hughes, R. Grzeszczuk, E. Sifakis, D. Kim, S. Kumar, A. P. Selle, J. Chhugani, M. Holliman and Y.-K. Chen, "Physical Simulation for Animation and Visual Effects: Parallelization and Characterization for Chip Multiprocessors", Intl. Symposium on Computer Architecture, 2007 (in press).
E. Sifakis, "Algorithmic aspects of the simulation and control of computer generated human anatomy models", PhD thesis, Stanford University, 2007.
E. Sifakis, A. Selle, A. Robinson-Mosher and R. Fedkiw, "Simulating Speech with a Physics-Based Facial Muscle Model", ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), edited by M.-P. Cani and J. O'Brien, 2006.
J. Teran, E. Sifakis, S. Blemker, V. Ng Thow Hing, C. Lau and R. Fedkiw, "Creating and simulating skeletal muscle from the Visible Human Data Set", IEEE Transactions on Visualization and Computer Graphics, 11, pp. 317-328, 2005.
E. Sifakis, I. Neverov and R.
Fedkiw, "Automatic Determination of Facial Muscle Activations from Sparse Motion Capture Marker Data", ACM Transactions on Graphics (SIGGRAPH Proceedings), TOG 24, pp. 415-425, 2005.
J. Teran, E. Sifakis, G. Irving and R. Fedkiw, "Robust Quasistatic Finite Elements and Flesh Simulation", ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), edited by K. Anjyo and P. Faloutsos, pp. 181-190, 2005.
E. Sifakis and R. Fedkiw, "Facial Muscle Activations from Motion Capture", in video proceedings of IEEE CVPR, 2005.
S. Blemker, J. Teran, E. Sifakis, R. Fedkiw and S. Delp, "Fast 3D Muscle Simulations Using a New Quasistatic Invertible Finite-Element Algorithm", International Symposium on Computer Simulation in Biomechanics, 2005.
P. Sundaram, E. Sifakis, D. S. Paik, C. F. Beaulieu, S. Napel, "Fold Removal in CT Colonography (CTC): A Physics-based Approach", Radiological Society of North America 91st Scientific Sessions, Scientific Assembly and Annual Meeting Program 2005:439, November 2005.
S. Liapis, E. Sifakis and G. Tziritas, "Color and texture segmentation using wavelet frame analysis, deterministic relaxation and fast marching algorithms", Journal of Visual Communication and Image Representation, 15:1, pp. 1-26, March 2004.
P. Sundaram, D. S. Paik, E. D. Sifakis, C. F. Beaulieu and S. Napel, "Selective Fold Removal in CT colonography using physically-based simulation", Fifth International Symposium on Virtual Colonoscopy, Boston, MA, October 28-29, 2004.
P. Sundaram, E. Sifakis, D. S. Paik, C. F. Beaulieu, S. Napel, "Physically based approach to removal of folds in CT Colonography: Proof of concept", Radiological Society of North America 90th Scientific Sessions, November 2004.
E. Sifakis and G. Tziritas, "Robust object boundary determination using a locally adaptive level set algorithm", Intl. Conference on Image Processing, 2003.
E. Sifakis and G. Tziritas, "Fast Marching techniques for Visual Grouping", in Geometric Level Set Methods in Imaging, Vision and Graphics, by S.
Osher and N. Paragios (eds), Springer Verlag, July 2003.
E. Sifakis, C. Garcia and G. Tziritas, "Bayesian level sets for image segmentation", Journal of Visual Communication and Image Representation, 13:1-2, pp. 44-64, March 2002.
E. Sifakis, I. Grinias and G. Tziritas, "Video segmentation using fast marching and region growing algorithms", EURASIP Journal on Applied Signal Processing, pp. 379-388, April 2002.
E. Sifakis and G. Tziritas, "Moving object localization using a multi-label fast marching algorithm", Signal Processing: Image Communication, 16:10, pp. 963-976, 2001.
E. Sifakis, I. Grinias and G. Tziritas, "Video segmentation using fast marching and region growing algorithms", 3rd European Workshop on Image Analysis for Multimedia Interactive Services, 2001 (invited).
S. Liapis, E. Sifakis and G. Tziritas, "Color and/or texture segmentation using deterministic relaxation and fast marching algorithms", Intl. Conference on Pattern Recognition, 2000.
E. Sifakis and G. Tziritas, "Fast marching to moving object location", Intl. Conference on Scale-Space Theories in Computer Vision, 1999.

US Patents

P. Sundaram, D. S. Paik, E. Sifakis, C. F. Beaulieu, R. Fedkiw, S. Napel, "Selective Fold Removal in CT Colonography", S04-212/US, filed Oct. 15, 2004.
A. McAdams, A. Selle, K. Ward, E. Sifakis and J. Teran, "Detail preserving Continuum Hair Simulation", .02 (2010).
R. Tamstorf, A. Selle, A. McAdams, J. Teran and E. Sifakis, "Efficient elasticity for character skinning with contact and collisions", P069110US-DAS/PROV, filed May 2011.

Technical reports

A. McAdams, A. Selle, R. Tamstorf, J. Teran and E.
Sifakis, "Computing the Singular Value Decomposition of 3x3 matrices with minimal branching and elementary floating point operations", University of Wisconsin - Madison technical report TR1690, May 2011.

Invited Talks

[Oct 06] "The challenges and future of Human Anatomy Simulation", Stanford University
[Apr 07] "The technological legacy of virtual human simulation", UCLA
[Apr 07] "Recent advances in physics-based simulation technology for face animation and virtual surgery", UCLA Center for Advanced Surgical and Interventional Technology
[Apr 07] "Faces, muscles and hybrid solids", Digital Domain
[May 07] "The technological legacy of virtual human simulation", NYU
[Oct 07] "Faces, muscles and hybrid solids", Walt Disney Animation Studios
[Nov 07] "Faces, muscles and CG flesh … What's under the skin of a virtual human?", UCLA CS201 Colloquium
[Jan 08] "Flesh, muscles, cuts and finite elements: Common challenges in animation, biomechanics and virtual surgery", UCLA/IPAM Workshop on Scientific Applications in Surgical Simulation of Soft Tissues
[Apr 09] "Physics-based simulation and applications in biomechanics, medicine and special effects", UCLA Engineering Tech Forum
[Jun 09] "Dynamic digital faces and bodies: Challenges, applications and broader impact of biomechanical modeling and simulation technology", USC Institute for Creative Technologies
[Aug 09] "Dynamic digital faces and bodies: Challenges, applications and broader impact of biomechanical modeling and simulation technology", Max Planck Institute for Informatics, Saarbruecken, Germany
[Aug 09] "Dynamic digital faces and bodies: Challenges, applications and broader impact of biomechanical modeling and simulation technology", ETH Zurich
[Sep 09] "Simulation techniques for biomechanics and CG characters: Challenges, applications and broader impact", California Institute of Technology
[Feb 2010] "Digital doubles and the synergistic role of scientific computing, biomechanics and computer animation", University of Wisconsin –
Madison
[Feb 2010] "Algorithmic and numerical aspects of simulating dynamic elastic solids for biomechanics, virtual surgery and visual effects", University of California, Santa Barbara
[Feb 2010] "Dynamic digital faces and bodies: Challenges, applications and broader impact of biomechanical modeling and simulation technology", University of California, Santa Barbara
[Mar 2010] "Digital doubles and the synergistic role of scientific computing, biomechanics and computer animation", University of Southern California and Institute for Creative Technologies
[Mar 2010] "Digital doubles and the synergistic role of scientific computing, biomechanics and computer animation", University of Michigan – Ann Arbor
[Apr 2010] "Digital doubles and the synergistic role of scientific computing, biomechanics and computer animation", Columbia University
[Apr 2010] "Digital doubles and the synergistic role of scientific computing, biomechanics and computer animation", University of Utah
[Oct 2010] "Dynamic digital faces and bodies: Challenges, applications and broader impact of biomechanical modeling and simulation technology", UCLA AMS Fall Meeting
[Feb 2011] "Computer-aided functional simulation of human anatomy: Challenges, emerging applications and broader impact", University of Wisconsin – Madison, School of Surgery
Physics-based Modeling for Animation and Medical Applications

Dimitris Metaxas
VAST Laboratory
Department of Computer and Information Science
University of Pennsylvania
dnm@,/dnm

Abstract

In this paper we present two instances of our physics-based modeling methodology, which we have been developing for over 10 years. The first method deals with modeling liquids for computer animation and is based on the solution of the Navier-Stokes equations on coarse grids using finite differences. The second method models the motion of the heart from MRI data based on finite element theory. The method is capable of analyzing the heart's motion in a clinically useful way.

1. Introduction

The development and use of Physics-Based Modeling (PBM) methods and techniques by many researchers has made it possible to successfully address difficult problems in computer animation (e.g., modeling of visco-elastic materials and fluid phenomena) and medical imaging (e.g., visualization and analysis of heart motion) that were not possible with purely geometric and kinematic techniques. PBM methods utilize geometry, kinematics, dynamics and material properties in order to model physical objects and their interactions with the physical world. Therefore, as opposed to purely geometric models, physics-based models incorporate additional constraints (e.g., material properties) that are very useful in both modeling and estimation applications. A unique feature of PBM is that it provides a unified methodology for the modeling and estimation of rigid, articulated, and deformable models, and their motions.

In this paper we will demonstrate two instances of our physics-based modeling methodology that have allowed us to model fluid phenomena for computer animation and analyze the motion of the heart for medical applications.

2. Modeling Fluids for Computer Animation

Some of the most breathtaking animations in recent years have been generated by modeling the interaction between light and water. Effects such as caustic
shading, reflection, refraction, and internal scattering have been addressed in some detail, with realistic results. One characteristic of that work, however, has been that the motion of the water surface is approximated by a non-physics-based function. Such approximations cannot easily incorporate dynamic objects or buoyant effects into the model, because the velocity of the fluid is known only on the surface, and internal pressure is not calculated at all. Chen and Lobo go further towards a physics-based fluid methodology by solving a simplified form of the Navier-Stokes equations in two dimensions [2]. However, they assume that the fluid has zero depth, and calculate the elevation of the surface solely from the instantaneous pressure. This allows them to perform some interaction between moving objects and the flow field, but restricts the class of problems that can be solved using the method. Although the surface height is varied for animation, they treat the fluid as being completely flat during the calculation. Therefore, convective wave effects, mass transport, and submerged obstacles are not covered by their technique.

Comprehensive models of fluid motion do exist, and there are a variety of tools for solving them in the field of Computational Fluid Dynamics (CFD). These methods generally involve direct simulation techniques to get accurate fluid motion. Unfortunately, in any direct simulation technique the temporal resolution is strongly coupled to the spatial resolution. Thus, if the spatial resolution doubles, the temporal resolution must also be doubled so that the solution does not move more than one spatial sample per time step. This gives running times proportional to the fourth power of the resolution, so most of these techniques will scale poorly. Furthermore, an animator needs a fairly clear understanding of the system of equations being solved so that he or she can set initial and boundary conditions to get the desired results. An ideal fluid simulator for graphics applications would
apply the correct conditions automatically based on the underlying geometry. CFD methods also resist external control, making it difficult to force a particular motion from a fluid unless it is a natural consequence of the system. These restrictions are an inherent part of the fluid modeling problem. The question arises whether it is possible to accurately model realistic fluid motion while keeping within acceptable efficiency bounds for computer graphics.

In our research [5, 4, 3] we have devised a solution to the Navier-Stokes equations for modeling liquid motion that satisfies many of an animator's needs. Realism is provided through a finite difference approximation to the incompressible Navier-Stokes equations. This gives rise to a complete pressure and velocity profile of the simulated environment. This profile is then used to determine the behavior of free surfaces, and is loosely coupled to the Lagrange equations of motion to include buoyant rigid objects in a scene. The range of behaviors accounted for includes wave effects such as refraction, reflection and diffraction, together with rotational motion such as eddies and vorticity. Furthermore, velocity and pressure are strongly coupled within the model. This means that even the simplest animation exhibits subtle realistic behavior not available using previous computer-graphics fluid models. Usability has also been a strong motivation for this work.
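The full incompressible Navier-Stokes solver referenced above is too long to reproduce here. As a much simpler taste of the underlying idea of explicit finite differencing on a coarse grid, the sketch below integrates a damped 2-D wave equation on a height field; this is a stand-in of our own choosing, not the authors' method, and all names and parameters are illustrative:

```python
import numpy as np

def wave_step(h, v, c=1.0, dx=1.0, dt=0.1, damping=0.999):
    """One explicit finite-difference step of h_tt = c^2 (h_xx + h_yy)
    on a height field h with vertical velocity v.
    Stable for c*dt/dx <= 1/sqrt(2); edge padding gives reflective walls."""
    hp = np.pad(h, 1, mode="edge")
    # 5-point Laplacian of h
    lap = (hp[:-2, 1:-1] + hp[2:, 1:-1] +
           hp[1:-1, :-2] + hp[1:-1, 2:] - 4.0 * h) / dx**2
    v = damping * (v + dt * c**2 * lap)   # accelerate, then damp slightly
    return h + dt * v, v

# drop a disturbance into the middle of a calm 32x32 pool
h = np.zeros((32, 32))
v = np.zeros_like(h)
h[16, 16] = 1.0
for _ in range(100):
    h, v = wave_step(h, v)
```

The point of the coarse-grid approach is the same as in the text: a cheap bulk solve on a low-resolution mesh, with surface detail recovered separately (e.g., by markers or a height field).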
The Navier-Stokes equations are solved over a coarse, rectangular mesh containing an arbitrary distribution of submerged or semi-submerged obstacles. Boundary conditions for the mesh are generated automatically by constraining the free variables at an obstacle-fluid or air-fluid boundary. This low-resolution calculation, together with homogeneous boundary conditions, leads to a relatively efficient determination of fluid velocity and internal pressure. Detail is achieved by using the velocity field to concentrate attention on regions of interest, i.e., the fluid surface. The surface is represented as either a chain of massless marker particles or a height field. The markers are carried around the mesh by convection, and can have arbitrary connectivity, accounting for multiple colliding surfaces in a scene.

Consideration is also given to controlling the overall behavior of the fluid. Liquid sources or sinks (known as inflow and outflow boundaries) can be included anywhere in the environment. They allow liquid to flow (or be forced) into a scene, or flow out at a natural rate. A time-dependent pressure field may also be applied to the fluid surface.
Thus, the effects of a strong wind can be simulated and initial waves driven realistically. The output from the system is a polygonal surface or height field, both of which can be rendered using many of the techniques presented by researchers in recent years [7, 9, 10, 12].

Fig. 1 shows a snapshot from a 3D animation titled Moonlight Cove. A 50x15x40 mesh was used to finely resolve the effect of two large ocean waves crashing into a shallow cove. Submerged rocks and an irregular sea bottom focus the waves into the center of the cove, causing a number of interesting features on the water surface. The wave becomes steeper as the water depth decreases, and eddies and pressure waves appear to the left of, and behind, the initial obstacle (Fig. 1(b)).

3. Heart Modeling for Clinical Applications

Estimating the volumetric shape, motion and deformations of the heart's left (LV) and right (RV) ventricles accurately and in a clinically useful way is a very important yet open research problem. Conventional cardiac imaging methods (e.g., MRI) still have many limitations, such as no explicit data correspondence between frames and insufficient resolution of the extracted data. In addition, most of the existing models for the analysis of the LV shape and motion are based on the use of parameters that are either too complex or too few to be used by a physician.

Recently, a new magnetic resonance imaging (MRI) technique based on magnetic tagging (MRI-SPAMM) has been developed at the University of Pennsylvania for imaging of regional heart wall motion (Axel and Dougherty [1]).
This fast, non-invasive technique promises to be very useful in the analysis of heart wall motion because it provides temporal correspondence of material points within the heart wall. This correspondence, in conjunction with the use of the three-dimensional (3D) location of each tagged datum, can subsequently be used as input to a motion analysis technique to extract the three-dimensional left ventricular motion parameters. The motion parameters can then be statistically analyzed to explore their correlation with the various types of LV disease.

In an effort to overcome the limited clinical usefulness of most existing models for analyzing the heart, we have developed a new class of deformable 3D surface models whose deformations can be described with a small number of intuitive parameters that are functions (Park et al. [11]) instead of constants. These parameter functions comprise an intelligent grouping, into a small number of sets, of the many local parameters that are necessary to analyze the heart's wall motions. An example of a parameter function is longitudinal contraction, which is the set of parameters describing the contraction of the LV from the apex to the base. Based on this new model, we can analyze both locally and globally the shape and motion of the heart in a way that is readily understood by a physician. This initial method has been extended recently to model the combined motion of the LV and the RV [6].

Fig. 2 shows a color plot, on the endocardial surface of the RV free wall, of the minimum principal strains (E3) at each time phase during systole. The associated principal directions were plotted at the midwall with lengths normalized according to the maximum contraction at those points. These strains were fairly uniform at end-systole, with an extremum of about -0.3 occurring towards the apex. It can be seen that the RV exhibited greater contraction (more negative E3) in the free wall than in the septum. Also, the direction of the contraction was more oblique to the
short-axis image planes in the free wall than in theseptum.(a)(b)Figure 1.Moonlight Cove.Two ocean waves crash into a shallow cove.Pressure and ve-locity effects throughout the water volume manifest themselves at the surface (a,b).The tools we have developed allow the quantitative anal-ysis of the heart’s shape and motion and the visual represen-tation of the analysis results in a clinically useful ing these models we can quantitatively verify and visu-alize in 3D the knowledge about the heart that was qualita-tively known to physicians.4.ConclusionsWe have presented two examples of PBM methods capa-ble of modeling liquids and the motion of internal organs.These methods are a small subset of a variety of other PBM methods we have developed over the years for the solu-tion of other problems in computer graphics,medical image analysis and computer vision[8].timetime 1Time2Time 3Time 4Figure 2.Normal heart deformation:color plot of minimum principal strain on the RV endocardium as biventricular model deforms from end-diastole to end systole (initial phase +4time intervals).Red lines are the minimum principal strain directions drawn at the cen-ters of the elements,with lengths normalized by the strain magnitude.References[1]L.Axel and L.Dougherty,“Heart wall motion:Improved method of spatial modulation of magnetization for MR imag-ing”,Radiology ,172,pp.349-350,1989.[2]J.Chen,and N.Lobo,“Toward interactive-rate simulation of fluids with moving obstacles using the Navier-Stokes equa-tions”,Graphical Models and Image Processing,March 1995,pp.107–116,1995.[3]“Modeling Water for Computer Animation”.N.Foster and munications of the ACM,43(7),pp.60-67,July 2000.[4]N.Foster and D.Metaxas.“Animating hot turbulent gases”.In Proceedings of ACM SIGGRAPH’97conference,Los An-geles,CA,August 1997.[5]N.Foster and D.Metaxas.“Realistic Animation of Liquids”.Graphical Models and Image Processing,58(5),pp.471-483, 1996.[6]I.Haber,D.Metaxas and L.Axel.“A Method for Three-dimensional Motion 
Reconstruction and Analysis of the Right Ventricle Using Tagged MRI”.Medical Image Analysis,to ap-pear.[7]M.Kass and ler,“Rapid,stablefluid dynamics forcomputer graphics”,Computer Graphics(Proc.SIGGRAPH), 24(4),pp.49–57,1990.[8] D.Metaxas.“Physics-Based Deformable Models:Applica-tions to Computer Vision,Graphics and Medical Imaging”.Kluwer-Academic Publishers,1997.[9]T.Nishita,and E.Nakamae,“Method of displaying opticaleffects within water:Using the accumulation buffer”,Proc.of SIGGRAPH’94,(July1994),pp.24–29,1994.[10] D.Peachy,“Modeling waves and surf”,Proc.of SIGGRAPH’86,in Computer Graphics,20(3),pp.65–74,1986.[11]J.Park,D.Metaxas and L.Axel.“Analysis of Left Ventricu-lar Wall Motion Based on V olumetric Deformable Models and MRI-SPAMM”.Medical Image Analysis Journal,1(1),pp.53-71,March1996.[12]M.Watt,“Light-water interaction using backward beam trac-ing”,Proc.of SIGGRAPH’90,in Computer Graphics24,pp.377–385,1990.。
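As a concrete illustration of the quantity plotted in Fig. 2, a minimum principal strain E3 and its direction can be obtained from a local deformation gradient via the Green-Lagrange strain tensor. This is a generic continuum-mechanics sketch, not the authors' implementation, and the sample deformation gradient is hypothetical.

```python
import numpy as np

def principal_strains(F):
    """Green-Lagrange strain E = 0.5 (F^T F - I) for a local 3x3
    deformation gradient F.  Eigenvalues are returned in ascending
    order, so evals[0] is the minimum principal strain (E3) and
    evecs[:, 0] is its principal direction."""
    E = 0.5 * (F.T @ F - np.eye(3))
    evals, evecs = np.linalg.eigh(E)   # symmetric => real eigensystem
    return evals, evecs

# Hypothetical systolic deformation: 30% shortening along one axis.
F = np.diag([0.7, 0.9, 1.0])
evals, evecs = principal_strains(F)
print(evals[0])   # minimum principal strain E3, here (0.7**2 - 1) / 2
```

A stretch ratio of 0.7 gives E3 = (0.7^2 - 1) / 2 = -0.255; negative values indicate contraction, which is why the free wall's "more negative E3" in Fig. 2 corresponds to greater contraction.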