Centrum voor Wiskunde en Informatica Known-item retrieval on broadcast TV J.A. List, A.R. v
The Mogao Caves, nestled in the heart of Dunhuang, Gansu Province, China, are a treasure trove of ancient Chinese art and culture. These caves, also known as the Thousand Buddha Grottoes, house a vast array of exquisite murals that have stood the test of time, offering a glimpse into the rich history and religious beliefs of the past.

The murals are a testament to the artistic prowess and spiritual depth of the people who created them. Spanning roughly a thousand years, from the 4th to the 14th century, they depict a wide range of subjects, including Buddhist narratives, historical events, and scenes from daily life. The vibrant colors and intricate details of these paintings reflect the skill and dedication of the artists who labored to create them.

One of the most striking features of the murals is their use of color. The artists employed a rich palette, from deep blues and greens to bright reds and yellows, to bring their subjects to life. The use of gold leaf adds a touch of opulence and grandeur, reflecting the wealth and prosperity of the time.

The murals also serve as a visual narrative of Buddhist teachings and stories. Scenes from the life of the Buddha, such as his birth, enlightenment, and death, are depicted in series of panels illustrating the key events and moral lessons of his life. The paintings also feature bodhisattvas, celestial beings, and other figures from Buddhist mythology, each with unique attributes and symbolic meanings.

Beyond their religious significance, the murals provide a window into the daily life and customs of the people of ancient China. Scenes of farming, trade, and social gatherings offer a glimpse into the social structure and way of life of the time. The attention to detail in these scenes is remarkable, with even the smallest objects and figures rendered with precision and care.

The preservation of the murals reflects the efforts of countless individuals and organizations over the years. Despite the ravages of time, natural disasters, and human interference, these precious works of art have been carefully restored and protected. Advanced conservation techniques and the establishment of the Mogao Grottoes Research Institute have played a crucial role in ensuring their longevity.

The murals nonetheless face significant challenges in the modern era. Climate change, increased tourism, and environmental pollution threaten the delicate balance of the caves' microclimate, potentially causing irreversible damage. Efforts are being made to mitigate these risks, such as limiting the number of visitors and implementing strict environmental controls within the caves.

In conclusion, the Mogao Caves murals are a remarkable testament to the artistic and cultural heritage of ancient China. Their intricate designs, vibrant colors, and profound religious and historical significance make them a must-see destination for anyone interested in exploring the rich tapestry of Chinese history and art. As we continue to appreciate and preserve these treasures, we also bear the responsibility of ensuring their survival for future generations to marvel at and learn from.
AMBULANT: A Fast, Multi-Platform Open Source SMIL Player
Dick C.A. Bulterman, Jack Jansen, Kleanthis Kleanthous, Kees Blom and Daniel Benden
CWI: Centrum voor Wiskunde en Informatica, Kruislaan 413, 1098 SJ Amsterdam, The Netherlands, +31 20 592 43 00, Dick.Bulterman@cwi.nl

ABSTRACT
This paper provides an overview of the Ambulant Open SMIL player. Unlike other SMIL implementations, the Ambulant Player is a reconfigurable SMIL engine that can be customized for use as an experimental media player core. The Ambulant Player is a reference SMIL engine that can be integrated in a wide variety of media player projects. This paper starts with an overview of our motivations for creating a new SMIL engine, then discusses the architecture of the Ambulant Core (including the scalability and custom integration features of the player). We close with a discussion of our implementation experiences with Ambulant instances for Windows, Mac and Linux versions for desktop and PDA devices.

Categories and Subject Descriptors
H.5.1 Multimedia Information Systems [Evaluation]; H.5.4 Hypertext/Hypermedia [Navigation].

General Terms
Experimentation, Performance, Verification

Keywords
SMIL, Player, Open-Source, Demos

1. MOTIVATION
The Ambulant Open SMIL Player is an open-source, full-featured SMIL 2.0 player. It is intended to be used within the research community (in and outside our institute) in projects that need source code access to a production-quality SMIL player environment. It may also be used as a stand-alone SMIL player for applications that do not need proprietary media formats. The player supports a range of SMIL 2.0 profiles (including desktop and mobile configurations) and is available in distributions for Linux, Macintosh, and Windows systems ranging from desktop devices to PDA and handheld computers.
While several SMIL player implementations exist, including the RealPlayer [4], Internet Explorer [5], PocketSMIL [7], GRiNS [6], X-SMILES [8] and various proprietary implementations for mobile devices, we developed Ambulant for three reasons:
• None of the existing SMIL players provides a complete and correct SMIL 2.0 implementation. The Ambulant player implements all of SMIL, based on the SMIL 2.0 Language profile plus extensions to support advanced animation and the needs of the mobile variant used by the 3GPP/PSS-6 SMIL specification [9].
• All commercial SMIL players are geared to the presentation of proprietary media. The Ambulant player uses open-source media codecs and open-source network transfer protocols, so that the player can be easily customized for use in a wide range of research projects.
• Our goal is to build a platform that will encourage the development of comparable multimedia research output. By providing what we expect will be a standard baseline player, other researchers and development organizations can concentrate on integrating extensions to the basic player (either in terms of new media codecs or new network control algorithms).

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. MM'04, October 10-16, 2004, New York, New York, USA. Copyright 2004 ACM 1-58113-893-8/04/0010...$5.00.
These extensions can then be shared by others. In contrast to the Helix client architecture [10], which also moved to a GPL core in mid-2004, the Ambulant player supports a wider range of SMIL target application architectures, it provides a more complete and correct implementation of the SMIL language, it provides much better performance on low-resource devices and it provides a more extensible media player architecture. It also provides an implementation that includes all of the media codecs as part of the open client infrastructure. The Ambulant target community is not viewers of media content, but developers of multimedia infrastructures, protocols and networks. Our goal has been to augment the existing partial SMIL implementations produced by many groups with a complete implementation that supports even the exotic features of the SMIL language. The following sections provide an introduction to the architecture of the player and describe the state of the various Ambulant implementations. We then discuss how the Ambulant Core can be re-purposed in other projects. We start with a discussion of Ambulant's functional support for SMIL.

2. FUNCTIONAL SUPPORT FOR SMIL 2.0
The SMIL 2.0 recommendation [1] defines 10 functional groups that are used to structure the standard's 50+ modules. These modules define the approximately 30 XML elements and 150 attributes that make up the SMIL 2.0 language. In addition to defining modules, the SMIL 2.0 specification also defines a number of SMIL profiles: collections of elements, attributes and attribute values that are targeted to meet the needs of a particular implementation community.
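To make the element/module vocabulary concrete, here is a minimal, illustrative SMIL 2.0 Language document (not taken from the paper; the media file names are invented), parsed with Python's standard xml.etree simply to list the elements it uses:

```python
import xml.etree.ElementTree as ET

# Illustrative SMIL 2.0 Language document: <seq> plays its children in
# sequence, <par> plays them in parallel; file names are hypothetical.
doc = """
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
    <layout>
      <root-layout width="320" height="240"/>
      <region id="main" width="320" height="240"/>
    </layout>
  </head>
  <body>
    <seq>
      <img src="slide1.png" region="main" dur="5s"/>
      <par>
        <img src="slide2.png" region="main" dur="10s"/>
        <audio src="voice.mp3"/>
      </par>
    </seq>
  </body>
</smil>
"""

root = ET.fromstring(doc)
# Strip the namespace prefix to list the distinct element names in use.
names = sorted({el.tag.split('}')[-1] for el in root.iter()})
print(names)
```

Even this tiny document draws on several functional groups at once: layout (`root-layout`, `region`), timing (`seq`, `par`, `dur`) and media (`img`, `audio`).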
Common profiles include the full SMIL 2.0 Language, SMIL Basic, 3GPP SMIL, XHTML+SMIL and SMIL 1.0 profiles. A review of these profiles is beyond the scope of this paper (see [2]), but a key concern of Ambulant's development has been to provide a player core that can be used to support a wide range of SMIL target profiles with custom player components. This has resulted in an architecture that allows nearly all aspects of the player to be plug-replaceable via open interfaces. In this way, tailored layout, scheduling, media processing and interaction modules can be configured to meet the needs of individual profile requirements. The Ambulant player is the only player that supports this architecture. The Ambulant player provides a direct implementation of the SMIL 2.0 Language profile, plus extensions that provide enhanced support for animation and timing control. Compared with other commercial and non-commercial players, the Ambulant player implements not only a core scheduling engine, it also provides complete support for SMIL layout, interaction, content control and networking facilities. Ambulant provides the most complete implementation of the SMIL language available to date.

3. AMBULANT ARCHITECTURE
This section provides an overview of the architecture of the Ambulant core. While this discussion is high-level, it will provide sufficient detail to demonstrate the applicability of Ambulant to a wide range of projects. The sections below consider the high-level interface structure, the common services layer and the player common core architecture.

3.1 The High-Level Interface Structure
Figure 1 shows the highest-level player abstraction. The player core supports top-level control external entry points (including play/stop/pause) and in turn manages a collection of external factories that provide interfaces to data sources (both for standard and pseudo-media), GUI and window system interfaces and interfaces to renderers.
Unlike other players that treat SMIL as a datatype [4], [10], the Ambulant engine has a central role in interaction with the input/output/screen/devices interfaces. This architecture allows the types of entry points (and the moment of evaluation) to be customized and separated from the various data sources and renderers. This is important for integration with environments that may use non-SMIL layout or special device interface processing.

Figure 1. Ambulant high-level structure.

3.2 The Common Services Layer
Figure 2 shows a set of common services that are supplied for the player to operate. These include operating system interfaces, drawing system interfaces and support for baseline XML functions. All of these services are provided by Ambulant; they may also be integrated into other player-related projects, or they may be replaced by new service components that are optimized for particular devices or algorithms.

Figure 2. Ambulant Common Services Layer.

3.3 The Player Common Core
Figure 3 shows a slightly abstracted view of the Ambulant common core architecture. The view is essentially that of a single instance of the Ambulant player. Although only one class object is shown for each service, multiple interchangeable implementations have been developed for all objects (except the DOM tree) during the player's development. As an example, multiple schedulers have been developed to match the functional capabilities of various SMIL profiles. Arrows in the figure denote that one abstract class depends on the services offered by the other abstract class. Stacked boxes denote that a single instance of the player will contain instances of multiple concrete classes implementing that abstract class: one for audio, one for images, etc. All of the stacked-box abstract classes come with a factory function to create the instances of the required concrete class. The bulk of the player implementation is architected to be platform independent.
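The stacked-box abstract classes and their factory functions can be sketched as follows. This is an illustrative Python outline of the pattern, not Ambulant's actual C++ API: the play/stop/redraw method names follow the paper's description of the playable and renderer interfaces, while everything else (class names, the registry) is an assumption for the sketch:

```python
from abc import ABC, abstractmethod

class Playable(ABC):
    """Scheduler-facing interface for a media node (play/stop)."""
    @abstractmethod
    def play(self): ...
    @abstractmethod
    def stop(self): ...

class Renderer(Playable):
    """Drawing-facing interface; not every playable is a renderer
    (e.g. audio, SMIL animation)."""
    @abstractmethod
    def redraw(self, region): ...

class AudioPlayable(Playable):          # playable, but not a renderer
    def play(self): return "audio: playing"
    def stop(self): return "audio: stopped"

class ImageRenderer(Renderer):          # both playable and renderer
    def play(self): return "image: shown"
    def stop(self): return "image: hidden"
    def redraw(self, region): return f"image: redraw in {region}"

class PlayableFactory:
    """Factory: maps a node's media type to a concrete class, so a
    profile or port can swap in its own implementations."""
    def __init__(self):
        self._registry = {}
    def register(self, media_type, cls):
        self._registry[media_type] = cls
    def create(self, media_type):
        return self._registry[media_type]()

factory = PlayableFactory()
factory.register("audio", AudioPlayable)
factory.register("img", ImageRenderer)
node = factory.create("img")
print(node.play())                      # image: shown
```

Because concrete classes are reached only through the factory registry, replacing a renderer (or an entire profile's worth of them) is a matter of registering different classes, which is the plug-replaceability the architecture aims for.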
As we will discuss, this platform-independent component has already been reused for five separate player implementations. The platform-dependent portions of the player include support for actual rendering, UI interaction and data source processing and control. When the player is active, there is a single instance of the scheduler and layout manager, both of which depend on the DOM tree object. Multiple instances of data source and playable objects are created. These interact with multiple abstract rendering surfaces. The playable abstract class is the scheduler interface (play, stop) for a media node, while the renderer abstract class is the drawing interface (redraw). Note that not all playables are renderers (audio, SMIL animation). The architecture has been designed to have all components be replaceable, both in terms of an alternative implementation of a given set of functionality and in terms of a complete re-purposing of the player components. In this way, the Ambulant core can be migrated to being a special-purpose SMIL engine or a non-SMIL engine (such as support for MPEG-4 or other standards). The abstract interfaces provided by the player do not require a "SMIL on Top" model of document processing. The abstract interface can be used with other high-level control models (such as in an XHTML+SMIL implementation), or to control non-SMIL lower-level rendering (such as timed text). Note that in order to improve readability of the illustration, all auxiliary classes (threading, geometry and color handling, etc.) and several classes that were not important for general understanding (player driver engine, transitions, etc.) have been left out of the diagram.

4. IMPLEMENTATION EXPERIENCES
This section will briefly review our implementation experiences with the Ambulant player. We discuss the implementation platforms used during Ambulant's development and describe a set of test documents that were created to test the functionality of the Ambulant player core. We conclude with a discussion of the performance of the Ambulant player.

4.1 Implementation Platforms
SMIL profiles have been defined for a wide range of platforms and devices, ranging from desktop implementations to mobile devices. In order to support our research on distributed SMIL document extensions and to provide a player that was useful for other research efforts, we decided to provide a wide range of SMIL implementations for the Ambulant project. The Ambulant core is available as a single C++ source distribution that provides support for the following platforms:
• Linux: our source distribution includes makefiles that are used with the RH-8 distribution of Linux. We provide support for media using the FF-MPEG suite [11]. The player interface is built using the Qt toolkit [12].
• Macintosh: Ambulant supports Mac OS X 10.3. Media rendering support is available via the internal QuickTime API and via FF-MPEG. The player user interface uses standard Mac conventions and support (Cocoa).
• Windows: Ambulant provides conventional Win32 support for current-generation Windows platforms. It has been most extensively tested with XP (Home, Professional and TabletPC) and Windows 2000. Media rendering includes third-party and local support for imaging and continuous media. Networking and user interface support are provided using platform-embedded libraries.
• PocketPC: Ambulant supports PocketPC-2000, PocketPC-2002 and Windows Mobile 2003 systems. The PocketPC implementations provide support for basic imaging, audio and text facilities.
• Linux PDA support: Ambulant provides support for the Zaurus Linux-PDA. Media support is provided via the FF-MPEG library and UI support is provided via Qt.
Media support includes audio, images and simple text. In each of these implementations, our initial focus has been on providing support for SMIL scheduling and control functions. We have not optimized media renderer support in the Ambulant 1.0 releases, but expect to provide enhanced support in future versions.

4.2 Demos and Test Suites
In order to validate the Ambulant player implementation beyond that available with the standard SMIL test suite [3], several demo and test documents have been distributed with the player core. The principal demos include:
• Welcome: a short presentation that exercises basic timing, media rendering, transformations and animation.
• NYC: a short slideshow in desktop and mobile configurations that exercises scheduling, transformation and media rendering.
• News: a complex interactive news document that tests linking, event-based activation, advanced layout, timing and media integration. Like NYC, this demo supports differentiated mobile and desktop configurations.
• Links: a suite of linking and interaction test cases.
• Flashlight: an interactive user's guide that tests presentation customization using custom test attributes and linking/interaction support.
These and other demos are distributed as part of the Ambulant player web site [13].

4.3 Performance Evaluation
The goal of the Ambulant implementation was to provide a complete and fast SMIL player. We used a C++ implementation core instead of Java or Python because our experience had shown that on small devices (which we feel hold significant interest for future research), the efficiency of the implementation still plays a dominant role. Our goal was to be able to read, parse, model and schedule a 300-node news presentation in less than two seconds on desktop and mobile platforms. This goal was achieved for all of the target platforms used in the player project. By comparison, the same presentation on the Oratrix GRiNS PocketPC player took 28 seconds to read, parse and schedule.
(The Real PocketPC SMIL player and the PocketSMIL player were not able to parse and schedule the document at all because of their limited SMIL language support.) In terms of SMIL language performance, our goal was to provide a complete implementation of the SMIL 2.0 Language profile [14]. Where other players have implemented subsets of this profile, Ambulant has managed to implement the entire SMIL 2.0 feature set with two exceptions: first, we currently do not support the prefetch elements of the content control modules; second, we provide only single top-level window support in the platform-dependent player interfaces. Prefetch was not supported because of the close association of an implementation with a given streaming architecture. The use of multiple top-level windows, while supported in our other SMIL implementations, was not included in version 1.0 of Ambulant because of pending work on multi-screen mobile devices. Both of these features are expected to be supported in the next release of Ambulant.

5. CURRENT STATUS AND AVAILABILITY
This paper describes version 1.0 of the Ambulant player, which was released on July 12, 2004. (This version is also known as the Ambulant/O release of the player.) Feature releases and platform tuning are expected to occur in the summer of 2004. The current release of Ambulant is always available via our SourceForge links [13], along with pointers to the most recent demonstrators and test suites. The W3C started its SMIL 2.1 standardization in May 2004. At the same time, the W3C's Timed Text working group is completing its first public working draft. We will support both of these activities in upcoming Ambulant releases.

6. CONCLUSIONS
While SMIL support is becoming ubiquitous (in no small part due to its acceptance within the mobile community), the availability of open-source SMIL players has been limited.
This has meant that any group wishing to investigate multimedia extensions or high-/low-level user or rendering support has had to make a considerable investment in developing a core SMIL engine. We expect that by providing a high-performance, high-quality and complete SMIL implementation in an open environment, both our own research and the research agendas of others can be served. By providing a flexible player framework, extensions from new user interfaces to new rendering engines or content control infrastructures can be easily supported.

7. ACKNOWLEDGEMENTS
This work was supported by the Stichting NLnet in Amsterdam.

8. REFERENCES
[1] W3C, SMIL Specification, /AudioVideo.
[2] Bulterman, D.C.A. and Rutledge, L., SMIL 2.0: Interactive Multimedia for Web and Mobile Devices, Springer, 2004.
[3] W3C, SMIL 2.0 Standard Testsuite, /2001/SMIL20/testsuite/
[4] RealNetworks, The RealPlayer 10, /
[5] Microsoft, HTML+Time in Internet Explorer 6, /workshop/author/behaviors/time.asp
[6] Oratrix, The GRiNS 2.0 SMIL Player, /
[7] INRIA, The PocketSMIL 2.0 Player, wam.inrialpes.fr/software/pocketsmil/
[8] X-SMILES: An Open XML-Browser for Exotic Applications, /
[9] 3GPP Consortium, The Third-Generation Partnership Project (3GPP) SMIL PSS-6 Profile, /ftp/Specs/archive/26_series/26.246/26246-003.zip
[10] Helix Community, The Helix Player, /
[11] FFMPEG, FF-MPEG: A Complete Solution for Recording, Converting and Streaming Audio and Video, /
[12] Trolltech, Qtopia: The QT Palmtop, /
[13] Ambulant Project, The Ambulant 1.0 Open Source SMIL 2.0 Player, /
[14] Bulterman, D.C.A., A Linking and Interaction Evaluation Test Set for SMIL, Proc. ACM Hypertext 2004, Santa Cruz, August 2004.
Complete Python tutorial collection (netdisk links)

Learning a programming language cannot be done overnight, and Python is no exception. There is plenty of Python material and video online, but it is scattered and unsystematic, and hitting problems with no help badly dents motivation. If you want to become a technical data analyst, or to move toward data mining, systems development or secondary development, what follows is a systematic collection of Python tutorials.

Qianfeng Python basics tutorial: /s/1qYTZiNE
Python course, advanced video series (full table of contents): /s/1hrXwY8k
Python course, Windows topics: /s/1kVcaH3x
Python course, Linux topics: /s/1i4VZh5b
Python course, web topics: /s/1jIMdU2i
Python course, machine learning: /s/1o8qNB8Q
Python course, Raspberry Pi devices: /s/1slFee2T

Python originated in the late 1980s. Its creator is Guido van Rossum of Centrum Wiskunde & Informatica, a mathematics and computer science research center in the Amsterdam Science Park, the Netherlands. Van Rossum has remained a highly influential figure in Python's development ever since; in fact, the community gave him an honorary title: Benevolent Dictator For Life (BDFL). From modest beginnings, Python has become one of the most popular server-side programming languages on the internet. According to W3Techs, it is used on many high-traffic sites, ahead of ColdFusion, PHP, and . Of those sites, more than 98% run Python 2.0, and only a little over 1% run 3.0.

As for how to learn Python: in essence, the core of any language is small, and Python's is leaner than most. Even so, using that core well is not easy, and starting from data types, classes and exceptions is the most basic approach.
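A minimal sketch of those three starting points (data types, classes, exceptions) in a few lines of Python; the names here are invented for illustration:

```python
# Data types: Python's core built-ins.
point = {"x": 1.5, "y": 2}           # dict mixing float and int values
tags = ["a", "b", "a"]               # list with a duplicate
unique = set(tags)                   # set removes the duplicate

# Classes: a small example with state and a method.
class Account:
    def __init__(self, balance=0):
        self.balance = balance
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

# Exceptions: handle the error the class raises.
acct = Account(100)
try:
    acct.withdraw(500)
except ValueError as err:
    message = str(err)               # balance is unchanged

print(unique, acct.balance, message)
```

Each of the three ideas appears in roughly the form you will meet it in real code: built-in containers, a class guarding an invariant, and a try/except recovering from a violated one.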
如果你需要更系统地学习Python语言,不妨来千锋Python培训班看看,负责教学的Python讲师,可以说是业界无可挑剔的讲师天团。
尹老师,刘老师,杨老师均是清华大学毕业的高材生,精通多门编程语言,拥有丰富的开发经验,多年IT名企工作经验以及丰富的项目实战经验。
·New Progress·
Advances in the diagnosis and treatment of hyperkalemia with comorbid risk factors
LUO Peiyi, MA Liang, GOU Shenju*
[Abstract] Hyperkalemia is a common clinical problem. Its risk factors include kidney disease, cardiovascular disease, diabetes, and the use of drugs that affect serum potassium.
TOILETS AND BATHROOM FURNITURE
This certificate covers any manufacturing defect of the product that may affect its performance, in accordance with the installation, use and maintenance manual. CENTRO CERAMICO LAS FLORES SAC certifies the warranty of Origin, Golden Bath, Bravat, Trébol Platinum and Noken brand sanitaryware under the following conditions:
1. Presentation of the sales document, invoice or receipt (original or electronic).
2. The warranty applies only to manufacturing defects of the product, not to failures arising from improper installation, repair, use or maintenance.
3. The warranty covers only the value of the component to be replaced; if necessary, the product will be replaced in full. It does not cover labor, transport, or other costs of removing or installing products.
4. If a defect is reported in a sanitaryware item or furniture piece, contact the Technical Service department of CENTRO CERAMICO LAS FLORES SAC, which will verify the product in order to establish its suitability and quality.
5. The warranty does not cover cracks or breakage caused by impact during transport and/or improper storage. It is very important that you inspect your merchandise upon receipt and confirm its condition.
6. Outside Metropolitan Lima, the warranty applies in the cities where Cassinelli stores are located; in other cities, a technical visit at the customer's request will be made only after a quotation and the customer's agreement to bear the cost of travel to the city where the product is installed.
If the product shows a manufacturing fault, the warranty will be applied as per point 3, with service provided in store.
7. Do not place excessive weight on, or climb onto, the furniture or the toilets.
8. Do not expose the product directly to sunlight.
9. Avoid direct contact of water or any other liquid with the bathroom furniture, to prevent seepage into the structure and swelling.
10. Avoid slamming the doors and drawers.
11. Avoid throwing paper into the toilet.

Coverage periods: this warranty certificate for one-piece toilets, toilets, bowls, washbasins and bathroom furniture is valid from the purchase date of the product indicated on your proof of purchase.

INFORMATION TO CONSIDER
Sanitaryware may show repairs (patches) in some non-visible areas of the ceramic (toilets or washbasins). The high-temperature manufacturing process leaves holes that are necessary for ventilation, and small fissures that arise during cooling, both on the internal base and tank of the toilet and likewise underneath the washbasin. Note that sanitary ceramic may show glaze defects permissible under the manufacturing technical standard, provided they are not visible from more than 1 meter away from the front and side of the product.

SANITARYWARE WARRANTY PERIODS (ceramic / internal accessory / complements):
- Trébol Platinum and Origin: lifetime (factory defects) / 3 years / 1 year.
- Golden Bath: lifetime (factory defects) / 1 year / 6 months.
- Bravat: 2 years / 1 year (accessories and complements).
- Noken: 10 years / 5 years (accessories and complements).
WASHBASINS AND FURNITURE (cabinet / countertop):
- Golden Bath furniture: 3 years / 6 months.
- Trébol Platinum furniture: 3 years / 1 year.
- Origin furniture: 3 years / 1 year.

We inspire change in your home.

CERTIFICATE OF WARRANTY: TOILETS AND BATHROOM FURNITURE
INSTALLATION
1. Hire qualified personnel to install the purchased products.
2. Check the product components before installation (read the installation manual or watch the provided video).
3. Before installing the toilet, verify that the floor or wall drain rough-in measurement matches the toilet, and also take into account the position of the water supply point.
4. The drain pipe and the water supply, both inside and outside, must be free of debris and cement residue that could block the water drainage.
5. For a floor-mounted toilet, mark the position of the anchor bolts on the surface parallel to the finished wall, then drill and install the bolts.
6. Place the wax ring around the toilet drain, then set the toilet on the floor and secure it in its final position by fitting the anchor bolts and tightening them to the ceramic. The ring must form a seal between the floor and the toilet to prevent odors and water leaks. Test operation before sealing.
Fixing the rim of the bowl to the floor, or ceramic washbasins, should be done with a neutral silicone for sanitary use. This allows the ceramic to be removed without breaking it (do not use products that would prevent removal).

MAINTENANCE
SANITARYWARE
* Do not clean sanitary ceramic with abrasive materials such as wire brushes or similar, which can damage the finish.
* Mix cleaning chemicals with water before applying them, to avoid damaging the chrome finish of the decorative hinges, push buttons or the seat itself.
BATHROOM FURNITURE
* Ceramic washbasins should be cleaned with water and liquid soap, using a soft sponge.
* Dry with a cloth afterwards to avoid stains.
* Avoid letting water pool on the furniture; it could cause the structure to swell.
* Clean the furniture only with a slightly damp cloth.
* Do not apply detergents, and avoid cleaning with abrasive sponges, on either the washbasin or the bathroom furniture itself.
* Daily cleaning of the ceramic is important to prevent surface stains (baking soda and white vinegar are recommended for removing stains from the ceramic).

The personal data you provide will be used and/or processed by Centro Cerámico Las Flores SAC strictly and solely to give you personalized assistance in managing a possible solution to the reported issue, and to document that assistance. Centro Cerámico Las Flores SAC may share and/or use and/or store and/or transfer your information to third parties, whether or not related to Centro Cerámico Las Flores SAC and whether or not they are its business partners, for activities related to the requested after-sales and/or technical service. You may at any time exercise your rights of information, access, rectification, cancellation and opposition regarding your data, in accordance with the Personal Data Protection Law in force and its Regulations. For more information, see the website or write to ********************************.

Name of purchaser ……………… Receipt or invoice number ……………… Purchase date ……………… ID (DNI) ……………… Signature ……………… Contact name ……………… Contact signature ………………

INSTALLATION AND MAINTENANCE OF TOILETS AND BATHROOM FURNITURE
CCLF Technical Service. Telephone 0800-1-2150 / 241-5753 / 243-2242 / 241-5746. Contact us via the web: ******************************************************
We inspire change in your home.
Compendium of Western Classics

Ethics:
1. The Nicomachean Ethics (Aristotle)
2. The Theory of Moral Sentiments (Adam Smith; call no. B82-02/S642)
3. A Discourse on Inequality (Rousseau; call no. D081/R864)
4. Critique of Practical Reason (Kant; call no. K16)
5. Foundations of the Metaphysics of Morals (Kant; call no. K16)
6. Utilitarianism (Mill; call no. D081/R864)
7. Principia Ethica (Moore; call no. M822)
8. A Theory of Justice (John Rawls)
9. Anarchy, State and Utopia (Robert Nozick; call no. B82-069/N961)
10. After Virtue (Alasdair MacIntyre)

Philosophy:
1. Republic (Plato; call no. P718)
2. Metaphysics (Aristotle; call no. A716)
3. Poetics (Aristotle; call no. A716)
4. On the Art of Poetry (Horace; call no. A716)
5. On the Sublime (Longinus; call no. A716)
6. Meditations on First Philosophy (René Descartes; call no. D445)
7. Pascal's Pensées (Blaise Pascal; call no. D445)
8. A Treatise of Human Nature (David Hume; call no. H921)
9. Critique of Pure Reason (Kant; call no. K16)
10. Critique of Judgment (Kant; call no. K16)
11. The Phenomenology of Mind (Hegel; call no. H462)
12. The Logic of Hegel (Hegel; call no. H462)
13. The World as Will and Representation (Schopenhauer; call no. S373)
14. Thus Spake Zarathustra (Friedrich Nietzsche; call no. N677)
15. Either/Or (Kierkegaard; call no. B534/K47)
16. Course in General Linguistics (Saussure; call no. H0/S255)
17. Ideas: General Introduction to Pure Phenomenology (Edmund Husserl; call no. H972)
18. Tractatus Logico-Philosophicus (Wittgenstein; call no. B521/W831)
19. Philosophical Investigations (Wittgenstein; call no. B521/W831)
20. Being and Time (Martin Heidegger; call no. H465)
21. Poetry, Language, Thought (Martin Heidegger; call no. H465)
22. Being and Nothingness (Jean-Paul Sartre; call no. S249)
23. Truth and Method (Hans-Georg Gadamer; call no. G123)
24. The Structure of Scientific Revolutions (Kuhn; call no. K96)
25. The History of Sexuality (Foucault; call no. K96)

Religious studies:
1. Confessions (St. Augustine)
2. Basic Writings of Saint Thomas Aquinas (Aquinas; call no. A647)
3. The Guide for the Perplexed (Maimonides; call no. B985/M223)
4. Basic Theological Writings (Martin Luther; call no. L973)
5. On Religion (Schleiermacher; call no. B972/S341)
6. I and Thou (Martin Buber; call no. B972/S341)
7. The Nature and Destiny of Man (Niebuhr; call no. B972/N665)
8. The Idea of the Holy (Rudolf Otto; call no. B972/O89)
9. The Courage to Be (Paul Tillich; call no. B972/O89)
10. Church Dogmatics (Karl Barth; call no. B921/B284)

Political science:
1. The Politics (Aristotle; call no. A716)
2. The Prince (Niccolò Machiavelli; call no. D033/M149)
3. The Social Contract (Rousseau; call no. D033/M149)
4. Leviathan (Thomas Hobbes; call no. D033/H682)
5. Two Treatises of Government (John Locke; call no. L814)
6. The Spirit of the Laws (Montesquieu; call no. M779)
7. Democracy in America (Alexis de Tocqueville; call no. T632)
8. Considerations on Representative Government (Mill; call no. D033/M645)
9. The Federalist Papers (Alexander Hamilton; call no. H217)
10. The Constitution of Liberty (Hayek; call no. D089/H417)

Economics:
1. An Inquiry into the Nature and Causes of the Wealth of Nations (Adam Smith; call no. S642)
2. Principles of Economics (Alfred Marshall; call no. M367)
3. The Economics of Welfare (Pigou; call no. P633)
4. The General Theory of Employment, Interest and Money (Keynes; call no. K44)
5. The Theory of Economic Development (Schumpeter; call no. K44)
6. Human Action: A Treatise on Economics (Mises; call no. M678)
7. Foundations of Economic Analysis (Samuelson)
8. Studies in the Quantity Theory of Money (Friedman; call no. F899)
9. Collective Choice and Social Welfare (Amartya Sen; call no. F899)
10. The Economic Institutions of Capitalism (Williamson; call no. W729)

Sociology:
1. Suicide: A Study in Sociology (Émile Durkheim; call no. D947)
2. The Protestant Ethic and the Spirit of Capitalism (Max Weber; call no. B920/W375)
3. The Philosophy of Money (Georg Simmel; call no. C91-03/S592)
4. A Treatise on General Sociology (Vilfredo Pareto; call no. C91-06/P227)
5. Ideology and Utopia (Mannheim; call no. M281)

Anthropology:
1. The Golden Bough (James Frazer; call no. B1/F848)
2. Argonauts of the Western Pacific (Malinowski; call no. M215)
3. The Savage Mind (Claude Lévi-Strauss; call no. B80/L664)
4. Structure and Function in Primitive Society (Radcliffe-Brown; call no. B80/L664)
5. Race, Language and Culture (Franz Boas; call no. C95/B662)

Psychology:
1. The Principles of Psychology (William James; call no. B84/J27)
2. Principles of Physiological Psychology (Wundt; call no. B845/W965)
3. The Interpretation of Dreams (Sigmund Freud; call no. B84-065/F889)
4. The Origin of Intelligence in the Child (Jean Piaget; call no. P579)
5. Science and Human Behavior (Skinner; call no. B84-063/S628)
6. The Archetypes and the Collective Unconscious (Jung; call no. B84-065/J95)
7. Motivation and Personality (Maslow; call no. B84-067/M394)

Law:
1. Ancient Law (Maine; call no. M225)
2. English Law and the Renaissance (Maitland; call no. M225)
3. Lectures on Jurisprudence (Austin; call no. D90/A936)
4. A Sociological Theory of Law (Luhmann; call no. D90-052/L926)
5. Fundamental Principles of the Sociology of Law (Ehrlich; call no. D90-052/E33)
6. Law, Legislation and Liberty (Hayek)
7. Pure Theory of Law (Kelsen; call no. D90/K29)
8. The Concept of Law (Hart; call no. D90/K29)
9. Law's Empire (Dworkin; call no. D90/D989)
10. Economic Analysis of Law (Richard Posner; call no. D90-059/P855)

History:
1. The Histories (Herodotus; call no. K125/H559)
2. The Peloponnesian War (Thucydides; call no. K125/T532)
3. The Annals of Imperial Rome (Tacitus; call no. K126/T118)
4. The City of God (St. Augustine; call no. B972/A923)
5. History: its Theory and Practice (Benedetto Croce; call no. K01/C937)
6. The Idea of History (Collingwood; call no. K01/C937)
7. The Mediterranean and the Mediterranean World in the Age of Philip II (Braudel; call no. K503/B825)
8. A Study of History (Toynbee; call no. K01/T756)

Business classics:
1. In Search of Excellence: Lessons from America's Best-Run Companies: the success secrets of America's best-run companies
2. Built to Last: Successful Habits of Visionary Companies: how visionary companies endure
3. Reengineering the Corporation: A Manifesto for Business Revolution: a manifesto for a revolution in business management
4. Barbarians at the Gate: The Fall of RJR Nabisco: the Wall Street takeover wave
5. Competitive Advantage: Creating and Sustaining Superior Performance: finding the fulcrum of success
6. The Tipping Point: How Little Things Can Make a Big Difference: a mind-changing work
7. Crossing the Chasm: Marketing and Selling Technology Products to Mainstream Customers: the road to successful high-tech innovation
8. The House of Morgan: the rise of an American banking dynasty and of modern finance
9. The Six Sigma Way: a ladder toward excellence
10. Seven Habits of Highly Effective People: Powerful Lessons in Personal Change
11. Liar's Poker: a twisted picture of greed
12. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail: how disruptive technologies bring down great companies
13. Japan Inc.: the Japanese economy in comic form
14. Den of Thieves: the full story of Wall Street's biggest insider-trading case
15. The Essential Drucker: the essence of the master of masters
16. Competing for the Future
17. The Warren Buffett Way: Investment Strategies of the World's Greatest Investor: the ideas and strategies of the king of investors
18. Jack: Straight from the Gut: a CEO's bible
19. Good to Great: Why Some Companies Make the Leap... and Others Don't: climbing to the peak of success
20. The New New Thing: A Silicon Valley Story

Economics classics:
1. Tableau économique, by François Quesnay (France, 1694–1774)
2. The Wealth of Nations, by Adam Smith (Britain, 1723–1790)
3. An Essay on the Principle of Population, by Thomas Robert Malthus (Britain, 1766–1834)
4. A Treatise on Political Economy, by Jean-Baptiste Say (France, 1767–1832)
5. On the Principles of Political Economy and Taxation, by David Ricardo (Britain, 1772–1823)
6. New Principles of Political Economy, by Simonde de Sismondi (France, 1773–1842)
7. The National System of Political Economy, by Friedrich List (Germany, 1789–1846)
8. Principles of Political Economy, by John Stuart Mill (Britain, 1806–1873)
9. Das Kapital, by Karl Marx (Germany, 1818–1883)
10. The Theory of Political Economy, by William Stanley Jevons (Britain, 1835–1882)
11. Principles of Economics, by Carl Menger (Austria, 1840–1921)
12. Elements of Pure Economics, by Léon Walras (France, 1834–1910)
13. Capital and Interest, by Eugen von Böhm-Bawerk (Austria, 1851–1914)
14. Principles of Economics, by Alfred Marshall (Britain, 1842–1924)
15. Interest and Prices, by Knut Wicksell (Sweden, 1851–1926)
16. The Distribution of Wealth, by John Bates Clark (USA, 1847–1938)
17. The Theory of the Leisure Class, by Thorstein Bunde Veblen (USA, 1857–1929)
18. The Theory of Economic Development, by Joseph Alois Schumpeter (Austria, 1883–1950)
19. The Economics of Welfare, by Arthur Cecil Pigou (Britain, 1877–1959)
20. The Economics of Imperfect Competition, by Joan Robinson (Britain, 1903–1983)
21. The General Theory of Employment, Interest and Money, by John Maynard Keynes (Britain, 1883–1946)
22. Value and Capital, by John Richard Hicks (Britain, 1904–1989)
23. The Road to Serfdom, by Hayek (Austria, 1899–1992)
24. Economics, by Paul Samuelson (USA, 1915–)
25. The Affluent Society, by John Kenneth Galbraith (USA, 1908–)
26. The Stages of Economic Growth, by Walt Rostow (USA, 1916–)
27. Investment in Human Capital, by Theodore William Schultz (USA, 1902–1998)
28. Capitalism and Freedom, by Milton Friedman (USA, 1912–)
29. Economics, by Joseph Stiglitz (USA, 1943–)
30. Principles of Economics, by Gregory Mankiw (USA, 1958–)
31. Sangdo (《商道》)

Chinese classics of strategy:
1. Tao Te Ching (《道德经》)
2. Guiguzi (《鬼谷子》)
3. Guanzi (《管子》)
4. The Analects (《论语》)
5. The Art of War (《孙子兵法》)
6. Xunzi (《荀子》)
7. Han Feizi (《韩非子》)
8. Strategies of the Warring States (《战国策》)
9. Records of Personages (《人物志》)
10. Essentials of Government of the Zhenguan Era (《贞观政要》)
11. The Reverse Classic (《反经》)
12. Zizhi Tongjian (《资治通鉴》)
13. Romance of the Three Kingdoms (《三国演义》)
14. Caigentan (《菜根谭》)
15. Zhinang (《智囊》)
16. The Thirty-Six Stratagems (《三十六计》)
17. Zeng Guofan's Family Letters (《曾国藩家书》)
18. Thick Black Theory (《厚黑学》)
19. The Prince (《君主论》)
20. On War (《战争论》)

Management classics:
1. The Principles of Scientific Management, by Frederick Winslow Taylor (USA, 1856–1915)
2. The Theory of Social and Economic Organization, by Max Weber (Germany, 1864–1920)
3. The Functions of the Executive, by Chester Barnard (USA, 1886–1961)
4. Industrial and General Administration, by Henri Fayol (France, 1841–1925)
5. The Social Problems of an Industrial Civilization, by Elton Mayo (USA, 1880–1949)
6. The Human Side of Enterprise, by Douglas McGregor (USA, 1906–1964)
7. Personality and Organization, by Chris Argyris (USA, 1923–)
8. How to Choose a Leadership Pattern, by Robert Tannenbaum (USA, 1915–2003)
9. The New Science of Management Decision, by Herbert Simon (USA, 1916–2001)
10. The Great Organizers, by Ernest Dale (USA, 1914–)
11. New Patterns of Management, by Rensis Likert (USA, 1903–1981)
12. Marketing Management, by Philip Kotler (USA, 1931–)
13. Engineer the Job to Fit the Manager, by Fred Fiedler (USA, 1922–)
14. Criteria of Organizational Effectiveness, by Stanley E. Seashore (USA, 1915–1999)
15. One More Time: How Do You Motivate Employees?, by Frederick Herzberg (USA, 1923–2000)
16. Organization and Management: A Systems and Contingency Approach, by Fremont Kast (USA, 1924–)
17. The Nature of Managerial Work, by Henry Mintzberg (Canada, 1939–)
18. Management: Tasks, Responsibilities, Practices, by Peter Drucker (USA, 1909–)
19. The Management Theory Jungle Revisited, by Harold Koontz (USA, 1908–1984)
20. Jack: Straight from the Gut, by Jack Welch (USA, 1935–)
21. Competitive Strategy, by Michael Porter (USA, 1947–)
22. Theory Z, by William Ouchi (USA, 1943–)
23. Out of the Crisis, by W. Edwards Deming (USA, 1900–1993)
24. The General Managers, by John Kotter (USA, 1947–)
25. In Search of Excellence, by Thomas Peters (USA, 1942–)
26. Leaders: The Strategies for Taking Charge, by Warren Bennis (USA, 1925–)
27. When Giants Learn to Dance, by Rosabeth Moss Kanter (USA, 1943–)
28. The Fifth Discipline, by Peter Senge (USA, 1947–)
29. Reengineering the Corporation, by Michael Hammer (USA, 1948–)
30. Built to Last, by James Collins (USA, 1958–)
31. Drucker on Management
32. The 7 Habits of Highly Effective People
Count Lucanor

Prologue

In the name of God. Amen. Among the marvelous things that God has made is the face of man: no two faces in the world are alike. Likewise, no two men have the same will or inclination. Yet there is one thing in which men are alike: they learn best what interests them most. Therefore, whoever wants to teach something to another should present it in a way that is pleasing to the one learning it. For this reason I, Don Juan, son of Prince Don Manuel, wrote this book with the most beautiful words I could. I did so following the example of physicians, who sweeten their medicines so that the sweetness carries the beneficial medicine along with it. Thus the reader will take delight in its teachings and, even without wanting to, will absorb its instruction. If by chance readers find something badly expressed, let them blame nothing but Don Juan Manuel's lack of understanding; if, on the contrary, they find something worthwhile, let them give thanks to God, the inspirer of good words and good works. And now, the prologue being finished, from here on begin the tales. The reader should imagine a great lord, Count Lucanor, speaking with Patronio, his counselor.

Glossary: asimismo: likewise, in the same way; voluntad: will, intention; ponen dulce: sweeten; arrastre: carry along; se deleitará con: will take delight in; por acaso: by chance; echen la culpa: lay the blame; agradézcanle: give thanks (to him); de aquí en adelante: from here on; su consejero: his counselor; En el nombre de Dios. Amén: In the name of God. Amen.
Origins and Development of Python

Python is a high-level programming language created by Guido van Rossum in the Netherlands in 1989.
Python was designed to be easy to read, easy to learn, and easy to maintain.
Its simple syntax and structure have made it a language of choice for beginners and professional programmers alike.
Python's development falls into three stages: the early Python 1.x line, the middle-period Python 2.x line, and today's Python 3.x line.
The earliest public release appeared in 1991 (version 0.9.0; Python 1.0 followed in 1994), developed by Guido van Rossum at the CWI (Centrum Wiskunde & Informatica) research institute in the Netherlands.
The 1.x releases were simple and easy to learn and use, but comparatively limited in performance and features.
Their main application areas were scientific computing, data analysis, and web development.
The middle-period Python 2.x line, first released in 2000, was an important milestone for Python.
Python 2.x was more stable, more powerful, and more flexible.
Its main application areas were web development, network programming, scientific computing, and data analysis.
The final 2.x release was Python 2.7, published in 2010 and still in wide use for many years afterwards (official support ended in January 2020).
The current Python 3.x line, first released in 2008, was a major upgrade for the language.
Python 3.x is cleaner, more elegant, and more efficient.
Its main application areas are web development, data analysis, artificial intelligence, and machine learning.
As of this writing, the newest release in the 3.x line is Python 3.9, published in October 2020.
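To make the 2-to-3 upgrade concrete, the short sketch below shows two of the best-known behavioral changes: `print` became a function, and `/` became true division (Python 2's truncating behavior now requires `//`). This is an illustrative aside, not an example drawn from the text above.

```python
# Python 3: print is a function, not a statement.
print("hello", "world")   # Python 2 wrote: print "hello", "world"

# Python 3: / is true division; // is floor division.
print(7 / 2)    # 3.5  (in Python 2, 7 / 2 evaluated to 3)
print(7 // 2)   # 3    (floor division, same in both versions)

# Python 3: text strings are Unicode by default.
s = "café"
print(len(s))   # 4 code points
```

Running the same arithmetic under Python 2 without `from __future__ import division` would silently truncate, which is one reason the 3.x transition took so long.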
Python's development has been shaped by many factors, the most important of which is the support of its open-source community.
The Python open-source community is a large one, made up of programmers from all over the world.
This community not only contributes technical support and code to Python's development, but also supplies a wealth of resources and tools for Python applications.
The open-source community is one of the keys to Python's success.
逢坂大河 in English

Onodera Oogawa is a captivating figure whose life story has captured the imagination of many. Born into a family with a rich cultural heritage, Onodera's journey has been one of resilience, determination, and an unwavering pursuit of personal growth.

From a young age, Onodera demonstrated a keen intellect and a thirst for knowledge that set them apart from their peers. Encouraged by their supportive family, Onodera immersed themselves in a wide range of academic disciplines, excelling in subjects such as literature, history, and philosophy. This insatiable curiosity would later shape the course of their life, leading them down a path of self-discovery and a deep appreciation for the complexities of the human experience.

As Onodera came of age, they found themselves drawn to the world of creative expression. Inspired by the works of literary giants and the timeless wisdom of ancient philosophers, Onodera began to hone their own writing skills, crafting poetic verses and insightful essays that captivated all who encountered them. This passion for the written word would become a defining aspect of their identity, a means through which they could share their unique perspective on the human condition.

Onodera's intellectual prowess and artistic talents did not go unnoticed, and they soon found themselves at the forefront of a burgeoning literary movement. Their works were praised for their depth of insight, their lyrical beauty, and their ability to challenge the status quo. Onodera's name became synonymous with a new generation of thinkers and creatives who were redefining the boundaries of what was possible in the realm of written expression.

But Onodera's journey was not without its challenges. As they navigated the complexities of the literary world, they also grappled with the personal demons that lurked within.
Onodera's introspective nature and their deep sensitivity to the world around them often led them to confront the darker aspects of the human experience – the pain, the sorrow, and the existential questions that plague the human condition.

Yet, it was precisely these struggles that fueled Onodera's creative fire, inspiring them to delve deeper into the human psyche and to explore the universal truths that bind us all together. Their writing became a conduit for their own personal growth, a means of processing the complexities of life and finding solace in the power of language.

As Onodera's reputation grew, so too did their influence. They became a beacon of inspiration for aspiring writers and thinkers, their words resonating with audiences across the globe. Their works were translated into multiple languages, and their name became synonymous with a new era of literary excellence.

But Onodera's impact extended far beyond the realm of writing. They were also a passionate advocate for social justice, using their platform to shed light on the struggles of marginalized communities and to champion the causes of equality and human rights. Their unwavering commitment to these principles earned them the respect and admiration of countless individuals who saw in Onodera a kindred spirit – a visionary who was not afraid to challenge the status quo and to fight for a more just and equitable world.

Throughout their life, Onodera continued to push the boundaries of what was possible, constantly reinventing themselves and exploring new avenues of creative expression. Whether they were penning a poetic masterpiece or delivering a rousing speech, Onodera's words always carried a profound weight, a sense of purpose that transcended the confines of the page or the stage.

And as the years passed, Onodera's legacy only grew stronger. Their works were studied in classrooms around the world, inspiring new generations of thinkers and writers to follow in their footsteps.
Their name became synonymous with a new era of literary excellence, a testament to the power of the written word to transform lives and to shape the course of human history.

Today, Onodera Oogawa's legacy lives on, a shining example of what can be achieved through a relentless pursuit of knowledge, a deep commitment to social justice, and an unwavering belief in the transformative power of the written word. Their story is one of inspiration, of resilience, and of the enduring human spirit – a testament to the remarkable potential that lies within each and every one of us.
Europe, with its rich and diverse history, has been the cradle of many significant events and developments that have shaped the world as we know it today. Here are some key characteristics of European history that are worth noting:

1. Ancient Civilizations: Europe's history is marked by the rise and fall of various ancient civilizations, such as the Greeks and Romans, who laid the foundations for Western philosophy, science, and political systems.
2. Feudalism: During the Middle Ages, Europe was predominantly feudal, with a social hierarchy that included kings, nobles, knights, and serfs. This system was based on land ownership and the exchange of protection for labor.
3. The Renaissance: The period from the 14th to the 17th century marked a cultural rebirth in Europe, known as the Renaissance. This era saw a revival in art, science, and learning, with figures like Leonardo da Vinci and Michelangelo making significant contributions.
4. The Age of Exploration: Europeans began to explore the world in search of new trade routes and territories, leading to the discovery of the Americas by Christopher Columbus and the establishment of colonies around the globe.
5. Religious Reformation: The 16th century was a time of religious upheaval, with the Protestant Reformation challenging the authority of the Catholic Church and leading to the formation of various Protestant denominations.
6. The Enlightenment: The 18th century was characterized by the Enlightenment, an intellectual movement that emphasized reason, individualism, and skepticism of traditional authority, which influenced the development of modern democracy and human rights.
7. Industrial Revolution: Europe was the birthplace of the Industrial Revolution in the 18th and 19th centuries, which transformed economies and societies through the introduction of machinery, mass production, and urbanization.
8. Napoleonic Wars: The period of the Napoleonic Wars (1803–1815) saw France, under the leadership of Napoleon Bonaparte, seeking to dominate Europe, leading to a series of conflicts that reshaped the continent's political landscape.
9. World Wars: The 20th century was marked by two devastating global conflicts, World War I and World War II, both of which originated in Europe and had profound impacts on the world order.
10. European Union: In the aftermath of World War II, Europe sought to prevent future conflicts by promoting cooperation and integration among its nations. The European Union, established in the late 20th century, is a political and economic union that aims to foster peace and stability.
11. Cultural Diversity: Europe is known for its cultural diversity, with each country and region having its own languages, traditions, and customs, which have contributed to a rich tapestry of European identity.
12. Scientific and Technological Advancements: Europe has been at the forefront of many scientific and technological advancements, from the development of the printing press to the discovery of the structure of DNA.
13. Colonialism and Imperialism: European powers played a significant role in the colonization of other parts of the world, which had lasting effects on global politics and economies.
14. The Cold War: The division of Europe into Eastern and Western blocs during the Cold War (1947–1991) was a defining feature of the continent's 20th-century history, with the Iron Curtain symbolizing the ideological and physical separation.
15. Post-War Recovery and Integration: Europe's post-World War II recovery and the process of integration have been remarkable, with the continent becoming a leader in social welfare, environmental protection, and human rights.

These characteristics only scratch the surface of Europe's complex and multifaceted history, which continues to evolve and influence the world in myriad ways.
Observation is a fundamental skill across various disciplines, from science to social studies, and it plays a crucial role in the learning process. In this essay, I will delve into the significance of observation, its application in different fields, and the impact it has on our understanding of the world.

Observation is the act of carefully watching and noting the details of a situation, person, or event. It is more than just looking: it involves a deeper level of engagement where one must be attentive to the subtleties and nuances of what is being observed. This skill is essential for gathering information and making informed decisions.

In the field of natural sciences, observation is the cornerstone of discovery. For instance, Charles Darwin's meticulous observations of the Galápagos finches led to his theory of natural selection. Similarly, astronomers rely on observation to study celestial bodies and phenomena, which has led to the understanding of the universe's vastness and complexity.

In social sciences, observation is vital for understanding human behavior and societal patterns. Sociologists and anthropologists often immerse themselves in different cultures to observe and document social norms, rituals, and interactions. This approach helps them to draw conclusions about the structure and dynamics of societies.

Moreover, observation is an indispensable tool in the field of education.
Teachers use observation to assess students' understanding, engagement, and learning styles. This allows them to tailor their teaching methods to meet the individual needs of their students, fostering a more inclusive and effective learning environment.

Observation also plays a significant role in the arts. Artists observe the world around them to capture its essence in their work. Whether it's a painter capturing the play of light on a landscape or a writer observing human emotions to create a compelling narrative, the act of observation is central to their creative process.

In addition to its practical applications, observation also enriches our personal lives. It enhances our appreciation of the world's beauty, from the intricate details of a butterfly's wing to the awe-inspiring sight of a starry night sky. By observing, we become more attuned to the world around us and develop a deeper connection with it.

However, effective observation requires practice and skill. One must learn to be patient, as observations can take time to yield meaningful insights. It also requires an open mind, as preconceived notions can cloud one's perception and lead to inaccurate conclusions.

In conclusion, observation is a multifaceted skill that is crucial in various aspects of life. It is the foundation of knowledge and understanding, enabling us to explore the world around us and make informed decisions. By cultivating the habit of observation, we can deepen our appreciation of the world and enrich our lives.
A Survey of Software Refactoring
Tom Mens, Tom Tourwé

Abstract—This paper provides an extensive overview of existing research in the field of software refactoring. This research is compared and discussed based on a number of different criteria: the refactoring activities that are supported; the specific techniques and formalisms that are used for supporting these activities; the types of software artifacts that are being refactored; the important issues that need to be taken into account when building refactoring tool support; and the effect of refactoring on the software process. A running example is used throughout the paper to explain and illustrate the main concepts.

Keywords—D.2.3 Coding Tools and Techniques; D.2.6 Programming Environments/Construction Tools; D.2.7.m Restructuring, reverse engineering, and reengineering

I. Introduction

An intrinsic property of software in a real-world environment is its need to evolve. As the software is enhanced, modified and adapted to new requirements, the code becomes more complex and drifts away from its original design, thereby lowering the quality of the software. Because of this, the major part of the total software development cost is devoted to software maintenance [1], [2], [3]. Better software development methods and tools do not solve this problem, because their increased capacity is used to implement more new requirements within the same time frame [4], making the software more complex again. To cope with this spiral of complexity there is an urgent need for techniques that reduce software complexity by incrementally improving the internal software quality. The research domain that addresses this problem is referred to as restructuring [5], [79] or, in the specific case of object-oriented software development, refactoring [6], [7].
According to the taxonomy of Chikofsky and Cross[8], restructuring is defined as“the transformation from one representation form to another at the same relative abstrac-tion level,while preserving the subject system’s external behaviour(functionality and semantics).A restructuring transformation is often one of appearance,such as alter-ing code to improve its structure in the traditional sense of structured design.While restructuring creates new versions that implement or propose change to the subject system,it does not normally involve modifications because of new re-quirements.However,it may lead to better observations of the subject system that suggest changes that would improve aspects of the system.”The term refactoring was originally introduced by William Opdyke in his PhD dissertation[6].Refactoring is basically the object-oriented variant of restructuring:“the Tom Mens(tom.mens@umh.ac.be),Universit´e de Mons-Hainaut, Avenue du Champ de Mars6,B7000Mons,Belgium.Tom Tourw´e(tom.tourwe@cwi.nl),Centrum voor Wiskunde en Informatica,P.O.Box94079,NL1090GB Amsterdam,The Nether-lands.process of changing a[object-oriented]software system in such a way that it does not alter the external behaviour of the code,yet improves its internal structure”[7].The key idea here is to redistribute classes,variables and methods across the class hierarchy in order to facilitate future adap-tations and extensions.In the context of software evolution,restructuring and refactoring are used to improve the quality of the soft-ware(e.g.,extensibility,modularity,reusability,complex-ity,maintainability,efficiency).Refactoring and restruc-turing are also used in the context of reengineering[9], which is the examination and alteration of a subject sys-tem to reconstitute it in a new form and the subsequent implementation of the new form[8].In this context,re-structuring is needed to convert legacy code or deteriorated code into a more modular or structured form[10],or even to migrate code to a 
different programming language or even language paradigm[11].The remainder of this paper is structured as follows.Sec-tion II explains general ideas of refactoring by means of an illustrative example.Section III identifies and explains the different refactoring activities.Section IV provides an overview of various formalisms and techniques that can be used to support these refactoring activities.Section V sum-marises different types of software artifacts for which refac-toring support has been provided.Section VI discusses essential issues that have to be considered in developing refactoring tools.Section VII discusses how refactoring fits in the software development process.Finally,Section VIII concludes.II.Running exampleIn this section we introduce a running example that will be used throughout the paper.The example illustrates a typical non-trivial refactoring of an object-oriented de-sign.The initial design depicted in Figure1represents an object-oriented class hierarchy.It shows a Document class that is refined into three specific subclasses ASCIIDoc,PS-Doc and PDFDoc.A document provides preview and print facilities,which are realised by invoking the appropriate methods in the associated Previewer and Printer classes, respectively.Before these methods can be invoked,some preprocessing or conversion needs to be done,which is re-alized differently for each of the Document subclasses.In Figure1this is represented by the different code fragments A,B,C and X,Y,Z,respectively.This design is not optimal because different function-alities of the Document class are distributed over all the subclasses.In order to add a new functionality to the Doc-ument class,such as a text search or a spell checker,we need to change every subclass of Document and we need to define the appropriate helper classes.Moreover,manyFig.1.Document class hierarchy and helper classessuch evolutions increase the complexity and reduce the un-derstandability of the design,because the Document class 
has many associations and no explicit relationship between all helper classes exists,although their roles are similar. To overcome these problems,the design needs to be refactored.By introducing a so-called Visitor design pat-tern[12],the same functionality can be achieved in a more localised fashion,while at the same time the understand-ability of the design is improved.This is illustrated in Figure2.The idea is to introduce a Visitor class hierar-chy,that groups all helper classes and defines a common interface for them(the visit*methods).At the same time, a generic accept method is implemented in all classes of the Document hierarchy.The accept method in each subclass calls a method,specifically defined for that subclass,of the Visitor hierarchy interface.In this refactored design model,new functionality can be added by simply creating a new subclass of Visitor,and implementing the visit*methods appropriately.As can be seen in Figure2,the implementations of the print and pre-view methods previously in subclasses of Document(i.e., A,B,C,X,Y,Z)have been moved to the visit*meth-ods of the Printer and Previewer classes(i.e.,A’,B’,C’, X’,Y’,Z’).Although the above example is relatively simple,it al-ready requires over twenty primitive refactorings to intro-duce the Visitor design pattern:1.The print method in each Document subclass(3occur-rences)is moved to class Printer using a MoveMethod refactoring2.To avoid name conflicts,each of the3moved print meth-ods needs to be renamedfirst to a visit*method using a RenameMethod refactoring3.The preview method in each Document subclass (3occurrences)is moved to class Previewer using a MoveMethod refactoring4.To avoid name conflicts,each of the3moved preview methods needs to be renamedfirst to a visit*method usinga RenameMethod refactoring5.An abstract Visitor class is introduced as superclass for Printer and Previewer using an AddClass refactoring6.Three abstract visit*methods are introduced in the new Visitor class using an 
AddMethod refactoring7.An accept method is introduced in all three subclasses of Document by extracting it from the print method and preview methods,using an ExtractMethod refactoring 8.All preview and print methods now call the accept method with an instance of the appropriate Visitor sub-class.Therefore,their definition can be pulled up to the Document class,by using a PullUpMethod refactoring. The refactorings in the above list are referred to as primitive refactorings.They are elementary behaviour-preserving transformations that can be used as building blocks to create the so-called composite refactorings[6],[13].These composite refactorings are usually defined asa sequence of primitive refactorings,and reflect more com-plex behaviour-preserving transformations that are more meaningful to the user.For example,the six refactorings in steps1and2of the above enumeration can be combined into the single composite refactoring MoveMethodsTo-Visitor shown in Figure3.In a similar way,steps3and 4in the above enumeration can be combined into a single composite refactoring.posite refactoring for renaming and moving print meth-ods from the Document subclasses to the Printer classIII.Refactoring activitiesThe refactoring process consists of a number of distinct activities:1.Identify where the software should be refactored;2.Determine which refactoring(s)should be applied to the identified places;3.Guarantee that the applied refactoring preserves be-haviour;4.Apply the refactoring;5.Assess the effect of the refactoring on quality character-istics of the software(e.g.,complexity,understandability, maintainability)or the process(e.g.,productivity,cost,ef-fort);6.Maintain the consistency between the refactored pro-gram code and other software artifacts(such as documen-tation,design documents,requirements specifications,tests and so on)As will be illustrated below,each of these activities can be supported by different tools,techniques or formalisms.Fig.2.Refactored design model 
for the Document class hierarchy
A. Identifying where to apply which refactorings
A first decision that needs to be made here is to determine the appropriate level of abstraction at which to apply the refactoring. Should the refactorings be applied to the program itself (i.e., the source code), or to more abstract software artifacts such as design models or requirements documents, for example?1 We will tackle this particular question in detail in Section V, and restrict ourselves to the subdomain of program refactoring here. In this subdomain, the activity of identifying the parts of the program that require refactoring (activity 1) and the activity of proposing refactorings that should be applied to these parts (activity 2) are usually combined.
Kataoka et al. implemented the Daikon tool to indicate where refactorings might be applicable by automatically detecting program invariants [14]. One invariant may be that a certain parameter of a method is always constant, or is a function of the other parameters of the method. In that case, it might be possible to apply a removeParameter refactoring. The main problem with this approach is that it requires dynamic analysis of the runtime behaviour: the application needs to be executed to infer the program invariants. To this end, the tool uses a representative set of test suites. It is, however, impossible to guarantee that a test suite covers all possible runs of a program. Therefore, the invariants may not hold in general. Nonetheless, very good results have been obtained in practice. Moreover, the approach is complementary to other approaches that rely on static information.
Probably the most widespread approach to detect program parts that require refactoring is the identification of bad smells. According to Kent Beck, bad smells are "structures in the code that suggest (sometimes scream for) the possibility of refactoring" [7]. As a concrete example of a bad smell, reconsider the Document class hierarchy design in Figure 1 of Section II. By analysing the code fragments A, B, C and
X, Y, Z, respectively, it is very likely that one can detect a significant amount of code duplication. (1 As a terminological side note: when we use the term program in the remainder of this paper, we specifically refer to the source code or executable code. In contrast, when we use the term software, we refer to any type of software artifact, including code, design models, requirements specifications, and so on.) This is a typical example of a bad smell, since code duplication should be avoided, as it decreases maintainability. Balazinska et al. use a clone analysis tool to identify duplicated code that suggests candidates for refactoring [15]. Ducasse et al. sketch an approach to detect duplicated code in software and propose refactorings that can eliminate this duplication [16]. The approach is based on an object-oriented meta model of the source code and a tool that is capable of detecting duplication in code. The proposed refactorings consist of removing duplicated methods, extracting duplicated code from within a method, and inserting an intermediate subclass to factor out the common code.
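The cited clone analysis tools are far more sophisticated, but the core idea of line-based duplicate detection can be illustrated with a toy sketch (all names are ours, not taken from any of the cited tools): normalise each line, hash every sliding window of lines, and report windows whose hashes collide.

```python
import hashlib
from collections import defaultdict

def find_duplicate_blocks(source: str, window: int = 3):
    """Report groups of identical `window`-line blocks in `source`.

    A toy line-hashing clone detector: strips whitespace from every line,
    hashes each sliding window of `window` lines, and groups windows with
    equal hashes. Returns a list of groups, each a list of 1-based start
    lines of identical blocks.
    """
    lines = [ln.strip() for ln in source.splitlines()]
    groups = defaultdict(list)
    for i in range(len(lines) - window + 1):
        block = "\n".join(lines[i:i + window])
        if block.strip():  # skip windows that are entirely blank
            digest = hashlib.sha1(block.encode()).hexdigest()
            groups[digest].append(i + 1)
    return [starts for starts in groups.values() if len(starts) > 1]
```

Applied to two methods with identical bodies (the situation of the print and preview code fragments above), the detector flags the shared block in both locations, suggesting an ExtractMethod or pull-up candidate.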
Martin Fowler informally links bad smells to refactorings [7]. Tourwé and Mens use a semi-automated approach based on logic meta programming to formally specify and detect these bad smells, and to propose refactoring opportunities that remove them [17]. A more ad hoc approach to detect structural weaknesses in object-oriented source code and solve them by refactorings is proposed by Dudziak and Wloka [19]. van Emden and Moonen combine the detection of bad smells in Java with a visualisation mechanism [18]. Simon et al. use object-oriented metrics to identify bad smells and propose adequate refactorings [20]. They focus on use relations to propose move method/attribute and extract/inline class refactorings. The key underlying concept is the distance-based cohesion metric, which measures the degree to which the methods and variables of a class belong together. Especially in combination with software visualisation, the use of object-oriented metrics seems well suited to detect places in the source code that are in need of refactoring [20], [21].
A final but important issue is that the identification of which refactorings to apply can be highly dependent on the particular application domain. If we restrict ourselves to, for example, web-based software, the question of "where and why" to refactor is partially answered by the high-level refactorings from [22].
B. Guaranteeing that the refactoring preserves software behaviour
By definition, a refactoring should not alter the behaviour of the software. Unfortunately, a precise definition of behaviour is rarely provided, or may be inefficient to check in practice.
The original definition of behaviour preservation as suggested by Opdyke [6] states that, for the same set of input values, the resulting set of output values should be the same before and after the refactoring. Opdyke suggests to ensure this particular notion of behaviour preservation by specifying refactoring preconditions. As a concrete example of such a refactoring precondition, reconsider
the primitive refactorings in the running example of Section II. The first refactoring suggested is MoveMethod(print, ASCIIDoc, Printer). It has a number of necessary preconditions: the classes ASCIIDoc and Printer should be defined; the method print should be implemented in ASCIIDoc; the method signature of print should not be present in class Printer. As can be seen in Figure 1, the third precondition is not satisfied, which is precisely why the refactoring RenameMethod(print, ASCIIDoc, visitASCII) was suggested to avoid the method signature conflict.
In many application domains, requiring the preservation of input-output behaviour is insufficient, since many other aspects of the behaviour may be relevant as well. This implies that we need a wider range of definitions of behaviour that may or may not be preserved by a refactoring, depending on domain-specific or even user-specific concerns:
• For real-time software, an essential aspect of the behaviour is the execution time of certain (sequences of) operations. In other words, refactorings should preserve all kinds of temporal constraints;
• For embedded software, memory constraints and power consumption are also important aspects of the behaviour that may need to be preserved by a refactoring;
• For safety-critical software, there are concrete notions of safety (e.g., liveness) that need to be preserved by a refactoring.
One may deal with behaviour preservation in a very pragmatic way, for example by means of a rigorous testing discipline. If we have an extensive set of test cases, and all these tests still pass after the refactoring, there is good evidence that the refactoring preserves the program behaviour. Unfortunately, some refactorings will invalidate existing tests, even if the refactoring does not alter the behaviour [23], [24]. The reason for this is that the tests may rely on the program structure that is modified by the refactoring.
Another pragmatic, but slightly more formal, approach is to adopt a weaker notion of behaviour preservation that is
insufficient to formally guarantee the full preservation of program semantics, but that works well in many practical situations. For example, we can define a notion of call preservation, which guarantees that all method calls are preserved by the refactoring [25]. In the presence of a type system, one can show that a refactoring preserves type correctness [26].
A more fundamental approach is to formally prove that refactorings preserve the full program semantics. For a language with a simple and formally defined semantics, such as the logic programming language Prolog, one can prove that some refactorings that improve the efficiency actually preserve the program semantics [27]. For more complex languages such as C++, where a formal semantics is extremely difficult to define, we typically have to put restrictions on the allowed language constructs or refactorings, and the applicability of a refactoring tool may be limited to a particular version of a particular compiler [28].
C. Assessing the effect of refactoring on quality
For any piece of software we can specify its external quality attributes (such as robustness, extensibility, reusability, performance). Refactorings can be classified according to which of these quality attributes they affect. This allows us to improve the quality of software by applying the relevant refactorings at the right places. To achieve this, each refactoring has to be analysed according to its particular purpose and effect. Some refactorings remove code redundancy, some raise the level of abstraction, some enhance reusability, and so on. This effect can be estimated to a certain extent by expressing the refactorings in terms of the internal quality attributes they affect (such as size, complexity, coupling and cohesion).
An important software quality characteristic that can be affected by refactoring is performance. It is a common misconception that improving the program structure has a negative effect on the program performance. In the context of logic and
functional programs, restructuring transformations typically have the goal of improving program performance while preserving the program semantics [27], [29]. In the context of object-oriented programs, Demeyer [30] investigated the effect of refactorings that replace conditional logic by polymorphism. He concludes that program performance gets better after the refactoring, because of the efficient way in which current compiler technology optimises polymorphic methods.
To measure or estimate the impact of a refactoring on quality characteristics, many different techniques can be used. Examples include, but are not limited to, software metrics, empirical measurements, controlled experiments and statistical techniques. Kataoka et al. propose coupling metrics as an evaluation method to determine the effect of refactoring on the maintainability of the program [31]. Tahvildari et al. encode design decisions as soft-goal graphs to guide the application of the transformation process [32]. These soft-goal graphs describe correlations between quality attributes. The association of refactorings with a possible effect on soft-goals addresses maintainability enhancements through primitive and composite refactorings. Tahvildari et al. use a catalogue of object-oriented metrics as an indicator to detect automatically where a particular refactoring can be applied to improve the software quality [33]. This is achieved by analysing the impact of each refactoring on these object-oriented metrics.
D. Maintaining consistency of refactored software
Typically, software development involves a wide range of software artifacts such as requirements specifications, software architectures, design models, source code, documentation, test suites, and so on. If we refactor any of these software artifacts, we need mechanisms to maintain their consistency. Since the activity of inconsistency management is a research area in its own right [34], [35], [36], we will not treat it in detail here. We only discuss a few approaches that
relate consistency maintenance to refactoring.
Bottoni et al. propose to maintain consistency between the program and design models by describing refactorings as coordinated graph transformation schemes [37]. These schemes have to be instantiated according to the specific code modification and applied to the design models affected by the change.
Within the same level of abstraction, there is also a need to maintain consistency. For example, if we want to refactor source code, we have to ensure that the corresponding unit tests are kept consistent [23]. Similarly, if we have different kinds of UML design models, and any of these is being refactored, the others have to be kept consistent. Van Der Straeten et al. suggest to do this by means of logic rules [39].
Rajlich uses the technique of change propagation to cope with inconsistencies between different software artifacts [38]. This technique deals with the phenomenon that, when one part of a software system is changed, dependent parts of the software may need to be changed as well.
IV. Refactoring techniques and formalisms
A wide variety of formalisms and techniques have been proposed and used to deal with one or more refactoring activities. We discuss two such techniques in detail: the use of assertions (preconditions, postconditions and invariants) and the use of graph transformation. Next, we discuss how formalisms can help us to guarantee program correctness and preservation in the context of refactoring. Finally, we provide an indicative, but inevitably incomplete, list of other useful techniques to support refactoring activities.
A. Invariants, pre- and postconditions
A refactoring's definition often includes invariants that should remain satisfied, and pre- and postconditions that should hold before and after the refactoring has been applied. These constitute a lightweight and automatically verifiable means to ensure that (certain parts of) the behaviour of the software is preserved by the refactoring. A concrete example of the use of preconditions was
already presented for the refactoring MoveMethod(print, ASCIIDoc, Printer) in Section III-B. A set of postconditions for the same refactoring would be: (1) the print method must be implemented in Printer after the refactoring; (2) the method signature of print does not exist in ASCIIDoc after the refactoring. An example of an invariant is the fact that the classes ASCIIDoc and Printer are defined both before and after the refactoring.
The use of preconditions and invariants has been suggested repeatedly in the research literature as a way to address the problem of behaviour preservation when restructuring or refactoring software artifacts. In the context of object-oriented database schemas (which are similar to UML class diagrams), Banerjee and Kim identified a set of invariants that preserve the behaviour of these schemas [40]. Opdyke adapted this approach to object-oriented programs, and additionally provided preconditions or enabling conditions for each refactoring [6]. He argued that these preconditions preserve the invariants. Roberts used first-order predicate calculus to specify these preconditions in a formal way [41]. The notion of preconditions or applicability conditions is also available in the formal restructuring approach of Ward and Bennett, using the formal language WSL [42].
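The precondition discipline for MoveMethod can be made concrete with a small sketch. The class model below (a mapping from class name to its set of method names) is our own simplification, not Opdyke's actual formalism; the class and method names come from the paper's running example.

```python
def check_move_method_preconditions(classes, method, src, dst):
    """Return the violated preconditions for MoveMethod(method, src, dst).

    `classes` is a toy model: class name -> set of method names.
    The three checks mirror the running example: both classes must be
    defined, `method` must be implemented in `src`, and `dst` must not
    already declare a method with the same name.
    """
    violations = []
    if src not in classes or dst not in classes:
        violations.append("both classes must be defined")
    else:
        if method not in classes[src]:
            violations.append(f"{method} must be implemented in {src}")
        if method in classes[dst]:
            violations.append(f"{method} must not exist in {dst}")
    return violations

def move_method(classes, method, src, dst):
    """Apply MoveMethod only when every precondition holds."""
    if check_move_method_preconditions(classes, method, src, dst):
        raise ValueError("preconditions violated; refactoring refused")
    classes[src].remove(method)
    classes[dst].add(method)
```

On a model where Printer already declares print, the third precondition fails, and only after a RenameMethod (modelled here as renaming the entry in ASCIIDoc to visitASCII) does the move go through, exactly as in the Section II scenario.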
Preconditions may vary depending on the complexity of the language studied. More complex languages typically require more preconditions on a refactoring in order to preserve the invariants. Unfortunately, there are some practical problems with preconditions. One problem is that the static checking of some preconditions may require very expensive analysis, or may even be impossible. Another problem is that the preconditions do not consider the size or structure of the program [6]. For example, C++ programs may perform integer arithmetic with the address of a variable in a class, which is problematic if the refactoring changes the physical ordering of the variables in that class.
A number of suggestions have been made to overcome the above problems with preconditions. Tip et al. suggest to use type constraints to efficiently verify preconditions that depend on interprocedural relationships between variable types [26]. This is particularly useful for refactorings that are concerned with generalisation. Roberts suggests to augment refactorings with postconditions [41]. These postconditions are particularly useful for those invariants that rely on dynamic information that is difficult to express, or expensive to check statically, with preconditions. Postconditions can also be used to increase the efficiency of a refactoring tool. From a theoretical point of view, it can be shown that a set of postconditions can be translated into an equivalent set of preconditions [43]. Roberts provided an algorithm to perform this translation for sequences of program transformations. Ó Cinnéide extended this algorithm to deal with iteration and conditional constructs [13].
B. Graph transformation
Traditionally, refactorings are specified as parameterized program transformations along with a set of pre- and postconditions that guarantee behaviour preservation if satisfied [6], [44]. If we adopt this view, there is a direct correspondence between refactorings and graph transformations.
Programs (or other kinds of software artifacts) can be expressed as graphs, refactorings correspond to graph production rules, the application of a refactoring corresponds to a graph transformation, and refactoring pre- and postconditions can be expressed as application pre- and postconditions [43], [45]. Table I summarises some formal properties of graph transformations that may be used to address important issues in refactoring.
TABLE I. Correspondence between refactoring and graph transformation:
  software artifact -> graph
  refactoring -> graph production
  composite refactoring -> composition of graph productions
  refactoring application -> graph transformation
  refactoring precondition -> application precondition
  refactoring postcondition -> application postcondition
  (in)dependence between refactorings in a sequence -> parallel or sequential (in)dependence
  conflict between refactorings applied in parallel to the same software artifact -> confluence and critical pair analysis
Hence, it is not surprising that the theory of graph transformations has been used to provide more formal support for software refactoring. Mens et al. use the graph rewriting formalism to prove that refactorings preserve certain kinds of relationships (updates, accesses and invocations) that can be inferred statically from the source code [25].
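The refactoring-as-graph-production correspondence can be sketched in a few lines. The graph encoding below (nodes as (kind, name) pairs, edges as labelled triples) and the function names are our own illustrative choices, not the formalism of any cited work: a RenameMethod production first checks its application precondition on the graph, then rewrites the matched node and redirects its incident edges.

```python
# Toy program graph: nodes are (kind, name) pairs;
# edges are (source_node, label, target_node) triples.

def rename_method(nodes, edges, cls, old, new):
    """Apply a RenameMethod graph production to a toy program graph.

    Application precondition (checked before rewriting, mirroring the
    refactoring-precondition correspondence): class `cls` must own a
    method node `old`, and must not already own a method node `new`.
    """
    owned = {t for (s, label, t) in edges
             if s == ("class", cls) and label == "owns"}
    assert ("method", old) in owned, "precondition: method must exist"
    assert ("method", new) not in owned, "precondition: no name clash"

    def rn(n):  # node replacement used by the production
        return ("method", new) if n == ("method", old) else n

    # The rewrite: replace the node and redirect all incident edges.
    new_nodes = {rn(n) for n in nodes}
    new_edges = {(rn(s), label, rn(t)) for (s, label, t) in edges}
    return new_nodes, new_edges
```

Because the production is a pure function from graph to graph, properties such as sequential independence of two renamings can be checked by simply comparing the graphs produced by the two application orders.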
Bottoni et al. describe refactorings as coordinated graph transformation schemes in order to maintain consistency between a program and its design when either of them evolves by means of a refactoring [37]. Heckel [43] uses graph transformations to formally prove the claim (and corresponding algorithm) of Roberts [41] that any set of refactoring postconditions can be translated into an equivalent set of preconditions. Van Eetvelde and Janssens [46] propose a hierarchical graph transformation approach to be able to view and manipulate the software and its refactorings at different levels of detail.
The properties of sequential and parallel (in)dependence of graph transformations are also extremely suitable for reasoning about the dependence between refactorings. Two refactorings are independent if they can be applied in any order, i.e., the order in which they are applied does not affect the end result. This gives rise to a whole range of useful application scenarios.
One scenario is the serialisation of refactorings that have been applied in parallel to the same software artifact [102]. During this serialisation process, it is possible that conflicts arise because the refactorings make incompatible changes.
To detect and resolve such conflicts, one can rely on existing results about parallelism and confluence [110] and critical pair analysis [111].
Analysis of sequential dependencies can also be used to reorder a given sequence of refactorings, for example, to normalise the sequence, to identify refactorings that annihilate each other's effect, to regroup subsequences into predefined composite refactorings, and so on.
When building composite refactorings it is useful to determine which refactorings have to be applied sequentially and which refactorings are mutually independent [41]. For example, the composite refactoring shown in Figure 3 of Section II consists of a sequence of 6 primitive refactorings, but there are only 3 sequential dependencies (represented by straight arrows): each MoveMethod refactoring has to be preceded by a Rename refactoring. The order in which the three (Rename, MoveMethod) pairs have to be applied, however, is irrelevant. This is represented by dashed arrows. This means that, to increase the efficiency of the refactoring, one may decide to apply these 3 pairs of primitive refactorings in parallel.
C. Formalisms for program correctness and preservation
Formal approaches are needed to guarantee that certain program properties remain invariant under a program transformation. We will make a distinction between the property of program correctness and the property of preservation.2 Program correctness is the property that a program will work without errors. The preservation property of a program transformation guarantees that (some aspect of) the program behaviour is preserved by the transformation.
Obviously, any program transformation should preserve the syntactic rules (or well-formedness rules) of the programming language. After the transformation, the software should still be syntactically correct. This can be checked by using a scanner and a parser. The semantics of the program should also remain correct, i.e., the program should not give rise to run-time errors. Unfortunately, the correctness property is in general undecidable. Gupta et al. showed that we cannot prove, for an arbitrary running program and an arbitrary update to it, that the update is valid in the sense that it will eventually result in a reachable program state of the newly added program code [63]. Because of the undecidability of this property, we can only take a conservative approach. For example, if we only consider restructurings of the same algorithm (as opposed to changes to program functionality), a syntactic analysis of the old and new program code can identify program points that preserve update validity.
The preservation property can either be checked statically or dynamically. The checking of refactoring preconditions [6], [41] can be considered a static approach.
However, the preconditions that are expressed in first-order predicate logic are only a conservative approximation, and hence rule out many legal refactorings. Mens et al. suggest other notions of behaviour preservation that can be checked statically, and show how this can be realised using a graph transformation formalism [25]. Access preservation means that all variable accesses should be preserved by the refactoring. Update preservation means that all variable updates should be preserved by the refactoring. Call preservation means that all method calls should be preserved by the refactoring.
(2 This distinction is not made in the domain of program transformation for functional languages [29]. In this domain, the term correctness is used to indicate that a program transformation preserves the extensional meaning of programs. We will not use correctness in this sense, because it leads to confusion with the more widely accepted definition of program correctness.)
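A static preservation check of this family can be sketched concretely. The toy checker below works on Python sources (our choice for illustration; the cited work targets other languages and a graph formalism) and uses set equality of statically collected call targets as a crude stand-in for call preservation:

```python
import ast

def call_targets(source: str) -> set:
    """Statically collect the names of all functions called in `source`."""
    tree = ast.parse(source)
    return {
        node.func.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
    }

def preserves_calls(before: str, after: str) -> bool:
    """Call preservation, crudely: the refactoring must keep the set of
    called functions unchanged (a conservative, purely static check)."""
    return call_targets(before) == call_targets(after)
```

A refactoring that merely reorders statements passes the check, while an edit that drops a call is rejected; like the preconditions discussed above, the check is conservative and will also reject some legal refactorings (e.g., one that legitimately inlines a call).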
Wavelet Types in Matlab
I. Introduction
The wavelet transform is a time-frequency analysis method for signals: by representing a signal as a linear combination of a set of wavelet basis functions, the time-frequency characteristics of the signal can be analysed precisely.
Matlab is a powerful numerical computing and data visualisation environment that provides many wavelet families and related functions, making it convenient for users to carry out wavelet analysis research and application development.
This article introduces the wavelet families commonly used in Matlab, including the Daubechies, Symlets, Coiflets and Haar wavelets.
We discuss each family in turn, covering its characteristics, how it is used, and the related functions, to help readers better understand and apply wavelet analysis.
II. Daubechies wavelets
The Daubechies wavelets are among the most widely used wavelet families; they were introduced by Ingrid Daubechies in 1988. They have compact support and orthogonality, which makes them well suited to the accurate representation and compression of signals. Their main characteristics are as follows:
1. Compact support: a Daubechies wavelet has only finitely many non-zero filter coefficients, so it gives a compactly supported representation of the signal with good localisation in time and frequency.
2. Phase behaviour: note that, apart from the order-1 case, the Daubechies low-pass and high-pass filters are not symmetric; the nearly symmetric variants are the Symlets discussed below.
3. Orthogonality: Daubechies wavelets are orthogonal, so signal energy is preserved across the transform and its reconstruction.
In Matlab, a Daubechies wavelet is selected by its name and order; for example, db1 denotes the Daubechies wavelet of order 1 (the Haar wavelet).
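The orthogonality and energy-preservation property described above can be demonstrated with the simplest member of the family, db1 (the Haar wavelet). The sketch below is an illustrative Python implementation of one transform level, not the Matlab toolbox code:

```python
import math

def haar_step(signal):
    """One level of the Haar (db1) wavelet transform.

    Returns (approximation, detail) coefficients. The 1/sqrt(2)
    normalisation makes the transform orthogonal, so signal energy
    is preserved, as the text notes for orthogonal wavelets.
    """
    assert len(signal) % 2 == 0, "signal length must be even"
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert one Haar step exactly (perfect reconstruction)."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out
```

Running the forward step and then the inverse recovers the original signal, and the summed energy of the approximation and detail coefficients equals the energy of the input, which is exactly the orthogonality property the text attributes to the Daubechies family.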
III. Symlets wavelets
The Symlets are a variant of the Daubechies wavelets; they likewise have compact support and orthogonality, and in addition they are nearly symmetric. Symlets strike a better balance between time and frequency localisation, which makes them suitable for signal analysis that needs good resolution in both domains. Their main characteristics are as follows:
1. Balanced time-frequency localisation: Symlets are more balanced in time and frequency, so they adapt better to scenarios where both time and frequency resolution matter.
2. Near symmetry: the Symlets filters are close to symmetric, which reduces phase distortion in the transform.
3. Orthogonality: Symlets are orthogonal wavelets, so they preserve signal energy.
An English Essay on "Silence Is Golden"
Title: The Virtue of Silence
Silence is a virtue often undervalued in today's world of constant noise and chatter. In a society where speaking up and being heard is encouraged, the significance of silence tends to be overshadowed. However, it is crucial to recognize that silence holds immense power and importance in various aspects of life.First and foremost, silence fosters introspection and self-reflection. In the absence of external distractions, one can delve deep into their thoughts and emotions, gaining valuable insights about oneself. It provides an opportunity for introspection, enabling individuals to understand their strengths, weaknesses, and aspirations more clearly. Through silent contemplation, people can confront their fears, resolve inner conflicts, and embark on a journey of self-discovery.Moreover, silence is a potent tool for effective communication. While speaking is essential for expressing thoughts and ideas, listening in silence is equally important. Active listening requires attentiveness and receptivity without interjecting or interrupting. By practicing silence during conversations, individuals can truly understand the perspectives of others, fostering empathy and mutual understanding. Furthermore, silence allows for moments of pause, enabling individuals to formulate thoughtful responses rather than reacting impulsively. In this way, silence enhances the quality of communication and promotes meaningful connections with others.In addition, silence cultivates a sense of peace and tranquility amidst the chaos of everyday life. In a fast-paced world filled with constant stimulation, finding moments of silence becomes increasingly challenging yet essential for maintaining mental well-being. Whether through meditation, nature walks, or simply sitting in quiet contemplation, embracing silence provides a refuge from the stresses and pressures of modern living. 
It allows individuals to recharge, center themselves, and find inner harmony amidst the external turmoil.
Furthermore, silence can be a powerful form of protest and resistance. In situations where words may be inadequate or ineffective, silence speaks volumes. It can symbolize solidarity, defiance, or mourning in the face of injustice or oppression. Historically, movements such as Gandhi's nonviolent resistance and silent protests have demonstrated the potent force of silence in catalyzing social change and raising awareness about pressing issues. By refusing to engage in verbal discourse, individuals can draw attention to injustices and inspire collective action through the power of silent unity.
However, it is essential to recognize that silence is not synonymous with passivity or indifference. While silence can be a source of strength and wisdom, there are times when speaking out is necessary to address wrongdoing or advocate for positive change. Silence should not be misconstrued as acquiescence or complacency in the face of injustice or oppression. Instead, it should be wielded thoughtfully and purposefully, recognizing its potential to inspire reflection, empathy, and action.
In conclusion, silence is far from being merely the absence of sound; it is a profound and multifaceted virtue that enriches our lives in countless ways. From fostering self-awareness and empathy to promoting effective communication and social change, silence holds immense power and significance. By embracing moments of silence in our lives, we can cultivate inner peace, deepen connections with others, and contribute to a more compassionate and just society. As the saying goes, "Silence is golden": a timeless reminder of the enduring value of quiet contemplation and mindful listening in a world filled with noise.
Guangzhou, the third-largest city in China, is a vibrant metropolis rich in history and culture. It offers a plethora of attractions for visitors to explore and enjoy. Here is an introduction to some of the most famous sites in Guangzhou.
1. The Canton Fair Complex (China Import and Export Fair Complex): Known as the Canton Fair, this is one of the largest trade fairs in China, attracting numerous international buyers and exhibitors. It's a testament to Guangzhou's significance in international trade.
2. The Pearl River (Zhujiang): The lifeblood of the city, the Pearl River is a must-see for its stunning skyline and vibrant night views. A cruise along the river offers a unique perspective of the city's landmarks.
3. Shamian Island: A historical enclave with European-style architecture, Shamian Island is a peaceful retreat from the bustling city. It's a great place to stroll and appreciate the colonial-era buildings.
4. Chimelong Safari Park: For wildlife enthusiasts, Chimelong Safari Park is a top destination. It's home to a vast array of animals and offers close encounters with some of the world's most fascinating creatures.
5. Baiyun Mountain (White Cloud Mountain): A natural beauty spot, Baiyun Mountain offers breathtaking views of the city and is a popular spot for hiking and picnicking.
6. The Temple of the Six Banyan Trees (Liu Rong Si): This ancient Buddhist temple is famous for its pagoda, the Flower Pagoda, which is one of the symbols of Guangzhou.
7. The Chen Clan Ancestral Hall (Chen Ancestral Hall Museum): A magnificent example of traditional Lingnan architecture, the Chen Clan Ancestral Hall showcases intricate wood carvings and ceramic sculptures.
8. Shangxiajiu Pedestrian Street: For shopping and food lovers, this bustling street is the place to be. It's lined with shops, markets, and street food vendors offering a taste of local Guangzhou cuisine.
9. 
The Canton Tower (Guangzhou Tower): An iconic structure of Guangzhou, the Canton Tower offers an observation deck with panoramic views of the city and is a symbol of modern Guangzhou.
10. Huaisheng Mosque (Lighthouse Mosque): The oldest mosque in Guangzhou, it's an important site for the Muslim community and a fine example of Islamic architecture.
These are just a few of the many attractions that Guangzhou has to offer. Whether you're interested in history, culture, nature, or modern urban life, there's something for everyone in this dynamic city.
Building Documentation Generators
A. van Deursen, T. Kuipers
Software Engineering (SEN), Report SEN-R9916, June 30, 1999
ISSN 1386-369X
CWI, P.O. Box 94079, 1090 GB Amsterdam, The Netherlands
CWI is the National Research Institute for Mathematics and Computer Science. CWI is part of the Stichting Mathematisch Centrum (SMC), the Dutch foundation for promotion of mathematics and computer science and their applications. SMC is sponsored by the Netherlands Organization for Scientific Research (NWO). CWI is a member of ERCIM, the European Research Consortium for Informatics and Mathematics.
Arie van Deursen and Tobias Kuipers
http://www.cwi.nl/~{arie,kuipers}/
{arie,kuipers}@cwi.nl
ABSTRACT
In order to maintain the consistency between sources and documentation, while at the same time providing documentation at the design level, it is necessary to generate documentation from sources in such a way that it can be integrated with hand-written documentation. In order to simplify the construction of documentation generators, we introduce island grammars, which only define those syntactic structures needed for (re)documentation purposes. We explain how they can be used to obtain various forms of documentation, such as data dependency diagrams for mainframe batch jobs. Moreover, we discuss how the derived information can be made available via a hypertext structure. We conclude with an industrial case study in which a 600,000 LOC COBOL legacy system is redocumented using the techniques presented in the paper.
1991 ACM Computing Classification System: D.2.2, D.2.5, D.2.7, D.3.4
Keywords and Phrases: Redocumentation, legacy systems, documentation generation, source code analysis, island grammars
Note: Work carried out under projects SEN1.1, Software Renovation, and SEN1.5, Domain
Specific Languages. This report will appear in the Proceedings of the International Conference on Software Maintenance 1999 (ICSM'99).
1. INTRODUCTION
The documentation of a system is needed to understand that system at a certain level of abstraction, in a limited amount of time. It is needed, for instance, if a system is migrated or re-engineered. It can be used to map functional modification requests as expressed by end users onto technical modification requests, and to estimate the cost of such modifications. Finally, documentation will help in the process of outsourcing maintenance, or when engineers that are new to the system need to learn about the system.
The source code of a system can be viewed as its most detailed level of documentation: all information is there, but usually we do not have enough time to comprehend all the details. Luckily, we do not usually need to know all the details. Instead, we would like to have enough information so that we can build a mental model of the system, and zoom in to the specific details we are interested in. The level of detail (or abstraction) we are interested in depends very much on what we intend to do with the system. This flexibility should be reflected in the documentation, which, therefore, should adhere to four criteria:
1. Documentation should be available on different levels of abstraction.
2. Documentation users must be able to move smoothly from one level of abstraction to another, without losing their position in the documentation (zooming in or zooming out).
3. The different levels of abstraction must be meaningful for the intended documentation users.
4. The documentation needs to be consistent with the source code at all times.
Unfortunately, these criteria are not without problems. Criterion 4 implies that documentation is generated from the source code. In practice this is seldom done. Consequently, it is violated by many legacy systems, which are modified continuously without updating the accompanying technical documentation.
Figure 1: Deriving
documentation from legacy sources. Solid lines indicate automatic processing, augmented with manually derived information indicated by dashed lines.

Criterion 3 makes documentation generation hard. Meaningful abstractions can benefit immensely from design information which is usually not present in the source code itself. Such information needs to be added manually to the documentation. For new systems, mechanisms like literate programming [14] provide systematic ways of putting design information in the source code. For legacy systems this would involve a significant manual updating of program comments. Besides, design information is more often than not lost for legacy systems.

In this paper, we study ways in which we can update the documentation of legacy systems such that all four criteria are met. We propose a combination of manual and automatic (re)documentation. Whatever documentation can be generated from the sources is derived automatically. This then is combined with information provided by hand. Depending on the state of the system, and the knowledge about the system, either one of those activities can play the predominant role in the final documentation that is delivered. Figure 1 shows the architecture of the documentation generators that are built this way.

The remainder of this paper is organized as follows. In the next section, we introduce island grammars, the technology we use for extracting facts from a system's source code. In Section 3 we discuss what information should be contained in documentation, and how we can derive it from the legacy sources. In Section 4 we explain how the information extracted can be presented at what level of abstraction, using graph visualization and hypertext as primary tools. In Section 5 we describe a real-world Cobol legacy system, what its documentation problems were, and how we applied the techniques described in this paper to build a documentation generator for that system. We end the paper with related work, a summary of the main contributions, and
suggestions for future work.

2. Source Code Analysis

In order to generate documentation from a system, we need to analyze the source code of that system. We have tried several analysis approaches. In this section we will discuss these approaches in detail. In later sections we discuss how we have used these approaches.

2.1 Lexical analysis

When generating documentation for a system, only a few constructs in the source code are of interest. After all, the documentation should be a useful abstraction of the system. The constructs of a language that are of interest very much depend on the type of documentation that should be generated. If these constructs have an easily recognizable lexical form, lexical analysis is an efficient way to find them. If, for instance, we are looking for files that are opened for reading in a Cobol source, we simply look for the string “open input” and take the word directly following that string as the file handle that has been opened.

The advantage of this approach is that we do not need to know the full syntax of the language we want to analyze. Another advantage is that lexical analysis is very efficient. This allows us to analyze large numbers of files in a short time, and also allows us to experiment with different lexical patterns: if a pattern does not yield the correct answer, the analysis can be easily changed and rerun.

The main disadvantage of lexical analysis is that it is (in general) not very precise, and that some language constructs are much harder to recognize lexically than others. For example, for the case study later discussed in this paper we need to find the files that were executed from a DCL program, the DEC job control language for VAX VMS. In DCL, we can look for the string “run”, which is the DCL keyword for execution. If, on the other hand, we would want to know which files are executed from a Bourne shell script, we would need to specify all built-in functions of the Bourne shell language. There is no special keyword for execution in the shell; rather, it
attempts to execute all words that are not built-in functions.

Strings such as “open input” and “run” obviously can occur in different contexts, and may mean completely different things in each context. These strings could occur in comment, for example, or inside a quoted string. Because we need to recognize different contexts in most cases, much of the original simplicity of the lexical pattern is gone. Furthermore, as long as we do not specify the full syntax of a language, there is the risk that we may have overlooked particular contexts in which a pattern can or cannot occur.

Most commonly used for lexical analysis are Unix tools such as grep, awk, and perl. Murphy and Notkin [17] describe LSME, a system which allows for the lexical specification of contexts of patterns, as well as the patterns themselves. For the analysis of Cobol, we have developed recover [8], which keeps track of the global structure of Cobol, and allows the user to specify patterns typically required in a program understanding context.

2.2 Syntactic Analysis

More precise analysis of source code can be achieved by taking the syntactic structure of the code into account, analyzing the abstract syntax tree instead of the individual source code lines. This makes the context in which a particular construct occurs explicitly available. Moreover, it abstracts from irrelevant details, such as layout and indentation. Unfortunately, most legacy systems are written in languages for which parsers are not readily available. Developing a grammar from which to generate such a parser requires a significant investment. As an example, Van den Brand et al. [2] report a period of four months needed to develop a fairly complete Cobol grammar.
For program understanding and documentation purposes, however, only a handful of language constructs are needed, so it seems too much work to have to specify the full grammar of a legacy language. Therefore, we propose the use of “island grammars”, in which certain constructs are parsed in full detail, whereas others are essentially ignored.

2.3 Island Grammars

An island grammar consists of (1) detailed productions for the language constructs we are specifically interested in; (2) liberal productions catching all remaining constructs; and (3) a minimal set of general definitions covering the overall structure of a program.

As an example, suppose we have a simple language L. Programs in L consist of a list of one or more statements. For documentation generation purposes we are only interested in one statement, the “SELECT” statement. The definition of the island grammar is in Figure 2. We use the grammar definition language SDF2 [20] for our definition.

  syntax
    Stat+                                                  -> Program     (1)
    ~[\.]+ "."                                             -> Stat        (2)
    "SELECT" FileHandle "ASSIGN" "TO" FileName Option* "." -> Stat        (3)
    Name+ ("IS")? Value                                    -> Option      (4)
    Id                                                     -> FileName    (5)
    Id                                                     -> Name        (6)
    Id                                                     -> Value       (7)
    Id                                                     -> FileHandle  (8)
    [A-Z][A-Z0-9\-]*                                       -> Id          (9)
  priorities
    "SELECT" FileHandle "ASSIGN" "TO" FileName Option* "." -> Stat  >
    ~[\.]+ "."                                             -> Stat

Figure 2: Island grammar for the “SELECT” statement of language L.

We can distinguish the following groups of productions:

• The definition of the statement of interest is on line (3), defining a statement to be produced by the keyword “SELECT”, a FileHandle, “ASSIGN”, “TO”, a FileName, a possibly empty list of Options, terminated with a “.” character. Productions (4–9) define the details of the other non-terminals.

  Level      Documentation requirements
  System     overall purpose, list of subsystems
  Subsystem  purpose, list of modules, batch jobs, databases, screens, ...
  Batch job  programs started, databases accessed, frequency, ...
  Program    behavior, programs called, databases read or written, invoked by, parameters, ...
  Section    functionality, external calls, sections performed, conditions tested, variables used, ...

Figure 3: Cobol system hierarchy, with associated documentation requirements

Grammars in SDF can be organized in different modules. This way, for each analysis we can have a different grammar that is an extension of a common core language. This helps
to keep the grammars as small and concise as possible. Consider the island grammar developed above. Here, productions (1), (2), and (5–9) can be viewed as being part of the core of language L. These can be put in a separate module. Then, the only productions needed for our “SELECT” analysis are productions (3–4), and the priority rule, which should be defined in a different module.

2.4 Parse Tree Analysis

The parser generated from the grammar in the previous section will return parse trees that can be easily analyzed. The parse trees are encoded in the ATerm format [1]. This parse tree can be read in by a Java framework we wrote, thus giving access to the parse tree as a Java object. The framework implements the visitor design pattern [11], via a visitor class that can be specialized to perform a particular analysis on the tree. This is simplified by the fact that the Java framework has full knowledge of the island grammar that has been specified, and contains methods for matching patterns of productions in the grammar to corresponding nodes in the tree.
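The kind of fact that production (3) extracts can be illustrated with a small sketch. The following Python fragment is hypothetical (the paper's implementation uses an SDF2-generated parser and a Java visitor, not regular expressions): it splits the input into “.”-delimited statements, matches the one island pattern, and treats everything else as water.

```python
import re

# Island: a detailed pattern for the one construct of interest;
# everything else is water and is skipped.
SELECT_STMT = re.compile(
    r"SELECT\s+(?P<handle>[A-Z][A-Z0-9-]*)\s+"
    r"ASSIGN\s+TO\s+(?P<filename>[A-Z][A-Z0-9-]*)",
    re.IGNORECASE,
)

def extract_selects(source: str) -> list[tuple[str, str]]:
    """Return (file handle, file name) pairs for every SELECT statement."""
    facts = []
    for stat in source.split("."):        # statements end in "."
        m = SELECT_STMT.search(stat)
        if m:
            facts.append((m.group("handle"), m.group("filename")))
    return facts

cobol = """
   SELECT INFILE ASSIGN TO INPUT-DATA.
   MOVE X TO Y.
   SELECT OUTFILE ASSIGN TO REPORT-FILE.
"""
print(extract_selects(cobol))
```

A real island parser would also reject apparent matches inside comments and string literals; that context sensitivity is exactly where a purely lexical sketch like this one falls short.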
The analysis results that are of interest can be written to a repository, and from there they can be combined, queried and used in the rest of the documentation generation process. All extractions described in Section 3 were performed using this Java parse tree analysis framework. The data extracted were put in a repository. The presentations described in Section 4 were then generated from that repository.

This way of analyzing source code is similar in concept to a number of other systems, e.g. CIAO [5], in the sense that there is a chain of analysis, filter, and presentation events. In our approach, however, we start filtering the data during the first (analysis) phase, because we only deal with those language constructs defined in the island grammar.

3. Extracting Documentation

In this section, we will discuss the sort of information that should be contained in software documentation, and how this information can be identified in the legacy sources.

3.1 Manual versus Automated Extraction

Given the choice between manual or automatic extraction of information from source code, automatic extraction (for example using island grammars) is the preferred option: it is consistent with the actual source code, and can be easily maintained by automatic regeneration.

If generation is not feasible, the facts needed to construct the documentation can be provided by hand. This may take the form of a list of programs and a one or two line description of their functionality. Whenever documentation is generated, data from this list is included as well. Moreover, automatic checks as to whether all programs are indeed contained in the lists can be made whenever documentation is regenerated, encouraging programmers to keep the descriptions up to date. The integration of manual and automated extraction is illustrated in Figure 1, which also shows how additional forms of externally available documentation can be included in the resulting documentation.

3.2 System Decomposition

We can decompose a large software system into
several layers of abstraction, ranging from individual procedures up to the overall system. At each level, we need documentation, helping us to answer questions about the purpose (why?) of a component, the subcomponents it consists of (part-of relationships), the components it needs to perform its tasks (uses relationships), the way in which it performs its tasks (how?), the way in which the component can be activated (usage conditions), the system requirements the component corresponds to, etc. The actual splitting in layers of abstraction, and the corresponding documentation requirements, will differ from system to system. The hierarchy with associated documentation requirements we use for Cobol systems is shown in Figure 3.

3.3 Aggregation and Use Relations

The part-of and uses relationships discussed in the previous section can be easily derived from the source code. In general, it is relatively straightforward to extract facts about calls, database usage, screens used, etc.

A factor complicating this extraction is that many legacy systems use non-standard conventions for, e.g., calling or database access. We have seen calling conventions in which all calls were redirected via an assembly utility, and database access conventions hiding all SQL operations via a set of Cobol modules. The flexibility of island parsing makes it particularly easy to tailor the extractors to such conventions.

3.4 System and Subsystem Partitioning

At the system level, the directory structure of program files or the naming conventions used usually provide a candidate partitioning into subsystems. If these are absent, or perceived as inadequate, we use automatic subsystem classification techniques to arrive at a better partitioning [15, 9]. Such alternatives can then be added to the documentation, helping the user to see component relations that do not immediately follow from the actual partitioning. In addition to the decomposition of the overall system, short descriptions of the individual subsystems as well as of the overall
behavior are needed in the documentation. In many cases, such top level documentation may already be available, in which case it can be included in the documentation generation process. If it is not, a description of the various subsystems should be added by hand.

3.5 Program Descriptions

In many systems, coding standards are such that each program or batch job starts with a comment prologue, explaining the purpose of this component, and its interaction with other components. If available, such a comment prologue is a very useful documentation ingredient which can be automatically extracted from the source.

Observe that it is generally not a good idea to extract all comment lines from a program's source into its documentation: many comment lines are tightly connected to specific statements, and meaningless in isolation. Moreover, in many cases obsolete pieces of code have been “commented out”, which clearly should not appear in system documentation.

3.6 Section Descriptions

For the sections (local procedures) of a Cobol program, it is usually not as easy to extract a description as it is for Cobol programs starting with a comment prologue. On the positive side, however, section names are generally descriptive and meaningful, explaining the purpose of the section. This is unlike Cobol program names, which generally have a letter/number combination as name indicating which subsystem it is part of, not what its purpose is.

Since we encountered an actual need for documentation of sections that consisted of more than just the name, but at the same time was more abstract than simply the complete source code, we decided to search for ways in which to select the essential statements from a section. In terms of the theory of program comprehension as proposed by Brooks [3], we try to select those statements that act as beacons for certain understanding tasks, such as finding out under what condition a certain piece of code is being executed. Statements we include in such section descriptions are the
conditions checked in control statements, calls to external programs, database access operations, calls to other sections, statements containing arithmetic computations, and comment lines. This explicitly excludes the frequently occurring MOVE statement, which in Cobol is an assignment from one variable to another. As it does nothing “interesting” (no computation), leaving it out of the documentation directs the reader's attention to those statements that do perform some task of interest. Following this strategy, the length of a summarized section is about one third of the length of its full source code. To make the summary as comprehensible as possible, we reuse indentation of the original source code, giving the reader a sense of the nesting level.

Figure 4: An example of the visualization of data dependencies in a DCL job

3.7 Batch Job Dependencies

For a Cobol mainframe application, the batch jobs determine which programs are started every day or night, and how data files used for communication purposes are sorted, renamed, etc. In many cases, the batch jobs are the least understood components of a Cobol system, in spite of their importance for the daily data processing. Documentation should help in understanding such batch jobs. We have experimented with visualizing the data dependencies in such batch jobs.

Finding the data dependencies for a batch job is a three step process. First, we identify the Cobol programs executed in a batch job. Second, we analyze these Cobol programs, determining which data files are read and which ones are written. Third, we return to the batch files, to see whether these data files occur in them, for example for sorting or renaming. Recognizing these dependencies involves two island grammars: one for the job control language, finding the execution, sort and renaming statements, and one for Cobol, identifying the data file manipulation statements.
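The three-step process can be sketched as follows. This is a hypothetical Python illustration with invented program and file names (PROG1, DAYFILE, ...), using crude regular expressions where the paper uses two island grammars; it emits the resulting dependency graph in dot syntax, as used for figures like Figure 4.

```python
import re

# A toy DCL job and two toy Cobol programs; all names are invented.
DCL_JOB = """
$ run PROG1
$ sort DAYFILE SORTED
$ run PROG2
"""
COBOL = {
    "PROG1": "SELECT OUTF ASSIGN TO DAYFILE. OPEN OUTPUT OUTF.",
    "PROG2": "SELECT INF ASSIGN TO SORTED. OPEN INPUT INF.",
}

def job_dependencies(job: str, sources: dict[str, str]) -> list[tuple[str, str]]:
    """Return directed edges of the data-dependency graph of one batch job."""
    edges = []
    # Step 1: Cobol programs executed in the job.
    programs = re.findall(r"^\$\s*run\s+(\S+)", job, re.I | re.M)
    # Step 2: data files read and written by those programs.
    for prog in programs:
        src = sources[prog]
        files = dict(re.findall(r"SELECT\s+(\S+)\s+ASSIGN\s+TO\s+([\w-]+)", src))
        for handle in re.findall(r"OPEN\s+INPUT\s+([\w-]+)", src):
            edges.append((files[handle], prog))    # file -> program (read)
        for handle in re.findall(r"OPEN\s+OUTPUT\s+([\w-]+)", src):
            edges.append((prog, files[handle]))    # program -> file (write)
    # Step 3: file manipulations (here: sort) back in the job itself.
    for infile, outfile in re.findall(r"^\$\s*sort\s+(\S+)\s+(\S+)", job, re.I | re.M):
        edges.append((infile, "sort"))
        edges.append(("sort", outfile))
    return edges

print("digraph deps {")
for a, b in job_dependencies(DCL_JOB, COBOL):
    print(f'  "{a}" -> "{b}";')
print("}")
```

On the toy input this yields the chain PROG1 → DAYFILE → sort → SORTED → PROG2, i.e. exactly the kind of functional dependency the visualization shows; order of execution is deliberately not part of the graph.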
Once the data dependencies are found, they can be visualized. The visualization of an example batch job is shown in Figure 4. The resulting graph only shows the functional dependencies: dynamic dependencies, such as the order of execution, are not explicitly visible. Also observe that in some cases, it will be impossible to determine the name of a data file, because it is determined at run time. Special nodes in the graph are used to mark such file names.

4. Presenting Documentation

Once we have decided which information to put into the documentation, we can decide how to present that information to the user. Hypertext has been proposed as a natural way of presenting software documentation [4, 18], as the hyperlinks can be used to represent, for example, part-of and uses relationships between the documented components.

The most natural way of organizing all the information derived is to follow the system hierarchy, producing essentially one page per component. For Cobol this would result in pages corresponding to the full system, subsystems, programs, batch jobs, and sections, following the decomposition of Figure 3. If a user knows what programs he wants to read about, finding an initial node to start browsing is simple.
In many cases, however, there may not be such a straightforward starting point. Therefore, we provide various indexes with entry points to the hypertext nodes, such as:

• Alphabetic index of program names;
• Keyword search on documentation contents;
• Graphs representing use relationships. In particular, navigating through a call graph may help to find execution starting points or modules frequently used. We have used the graph drawing package dot [12] to integrate clickable image maps for various call graphs and data-dependency graphs into generated documentation. In order to prevent visual cluttering of graphs, we have applied node concentration on them, as can be seen in Figure 4.
• Hand-written index files, establishing links between requirements and source code elements.

Many presentation issues are not specific to software documentation. By using a standard format such as HTML, the generated documentation can benefit from various future developments of the Web, such as search engines, page clustering based on lexical affinity, link generation from textual documentation files, the use of XML to establish a better separation of content from presentation, etc.

5. Business Case

We have used all the techniques and ideas discussed in this paper in a commercial project aiming at redocumenting a Cobol legacy system. In this section, we describe our findings.

5.1 Background

PensionFund is a system for keeping track of pension rights of a specific group of people in the Netherlands. It consists of approximately 500 Cobol programs, 500 copybooks, and 150 DEC DCL batch jobs, totaling over 600,000 lines of code. The main tasks of the system are processing pension contributions and pension claims.
Several years after the initial delivery, the organization responsible for PensionFund decided to outsource all maintenance activities to a division of Dutch software house Roccade, specializing in software management and maintenance. In order to make a realistic estimate of the anticipated maintenance costs involved before accepting maintenance commitments, Roccade performed a system scan in which a number of key factors affecting maintainability are estimated.

One of the outcomes of the scan was that the documentation for PensionFund was not adequate. In fact, documentation was not kept up to date: for example, although in 1998 a number of major PensionFund modifications were implemented, the documentation was never updated accordingly. Very little documentation maintenance had been performed, although the need for documentation grew as more and more programmers who had participated in the original design of PensionFund moved to other projects. The lack of proper documentation resulted in:

• A growing backlog of major and urgent modification requests, which by early 1999 had risen to 12.
• Difficulty in carrying out adequate year 2000 tests, since the documentation did not help to identify the sources of errors encountered during testing.
• Difficulty in mapping modification requests, phrased in terms of desired functionality modifications, onto changes to be made in actual programs.
• Difficulty in splitting the large number of daily batch jobs into clusters that could be run independently and in parallel, which was becoming necessary as the increasing number and size of the batch jobs caused the required daily compute time to grow towards the upper limit of 24 hours.

5.2 Documentation Wishes

To remedy these PensionFund problems, a redocumentation project was planned. The plan was to compose a number of MS-Word documents, one per program, containing:

• A short description;
• Calls made (from other Cobol programs or batch jobs) to this program, and calls made from this program;
• Database entities as well
as flat files read and written;
• Data flow diagram;
• Description of functionality in pseudo-code.

Apart from the per program documentation, one data flow chart per batch file was planned for. Management was willing to make a significant investment to realize this documentation. Initially, the idea was to write this documentation by hand. This has the advantage that documentation writers can take advantage of their domain or system knowledge in order to provide the most meaningful documentation. Unfortunately, hand-written documentation is very costly and error prone. Because it is not a job many people like to do, it is difficult to find skilled documentation writers. Therefore, it was decided to try to generate the documentation automatically. This has the advantages that it is cheap (the tools do the job), accurate, complete, and repeatable. If necessary, it was argued, it could be extended with manually derived additional information.

5.3 Derived Documentation

The contents requirements of the PensionFund documentation correspond to the wishes discussed in the previous section. The specific information derived per program is shown in Figure 5. Arriving at this list and determining the most desirable way of presentation was an interactive process, in which a group of five PensionFund maintenance programmers was involved.

The fact extraction phase mainly involved finding the structure of PERFORM, CALL, and database access statements, and was implemented using island parsing. For those extraction steps for which a line by line scan was sufficient (for example, Cobol comment extraction), or for the ones which required the original layout and indentation (summarizing sections), lexical analysis was implemented using Perl. The result of the fact extraction was a set of relations, which were combined into the required relations per program using Unix utilities such as join and AWK. The final production of HTML code from the resulting relation files was written using Perl. All the documentation per program could be generated
automatically. Even the two-line description per program could be generated, as this was an easily recognizable part of the prologue comment. Had this not been the case, this would have required a manual step. As top level indices we generated alphabetic lists, lists per subsystem, and clickable call graphs. Moreover, we composed one index manually, grouping the programs based on their functionality.

As a separate top level view, we used the data dependency visualization we derived from the batch files. For each DCL file, we used the techniques described in Section 2 to find all Cobol programs that are executed. We then analyzed these Cobol programs to find the data files they read and write. Using static analysis it is impossible to find all the data file names, because, in this system, some file names were obtained dynamically. This occurs especially in error conditions, where the name of the file to write the error data to is somehow related to the kind of error. The files we could not find names for are only a small fraction of all data files. In order to visualize these unnamed files at a later stage, we introduced special file names for these files. In Figure 4 these unresolved file names can be seen on the left side, and are clearly marked: “unresolved”. The list of data files was then matched against the DCL files again, to see whether the data was manipulated there. In the PensionFund system, we looked at the sort statement, which takes one file and a number of sort parameters, and writes to a different file. They are visualized as diamonds in the figure.

An example browsing session through the generated documentation is shown in Figure 6. A typical session would be a maintenance programmer trying to find out why a particular batch job did not work as expected.
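The final HTML production was written in Perl; the shape of a generated program page can be sketched as follows. The program name, relation contents, and page layout below are invented for illustration, standing in for the repository the project built with join and AWK.

```python
# Invented relations for one program, as they might come out of the
# fact-extraction repository.
RELATIONS = {
    "P043": {
        "description": "Processes monthly pension contribution records.",
        "calls": ["P044", "P051"],
        "called_by": ["NIGHTRUN"],
        "files_read": ["CONTRIB"],
        "files_written": ["CONTRIB", "ERRLOG"],
    },
}

def program_page(name: str, rel: dict) -> str:
    """Render one hypertext node; every related component becomes a link."""
    def link(n: str) -> str:
        return f'<a href="{n}.html">{n}</a>'
    rows = [("Calls", rel["calls"]), ("Called by", rel["called_by"]),
            ("Files read", rel["files_read"]),
            ("Files written", rel["files_written"])]
    items = "".join(f"<dt>{label}</dt><dd>{', '.join(map(link, names))}</dd>"
                    for label, names in rows)
    return (f"<html><body><h1>{name}</h1>"
            f"<p>{rel['description']}</p><dl>{items}</dl></body></html>")

print(program_page("P043", RELATIONS["P043"]))
```

Because each page is regenerated from the extracted relations, the pages stay consistent with the sources (criterion 4), and the links realize the part-of and uses navigation of Section 4.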
Known-Item Retrieval on Broadcast TV
Johan List, Alex van Ballegooij, Arjen de Vries
{j.a.list, alex.van.ballegooij, arjen.de.vries}@cwi.nl
Information Systems (INS), Report INS-R0104, ISSN 1386-3681, April 30, 2001
CWI, P.O. Box 94079, 1090 GB Amsterdam, The Netherlands

ABSTRACT
Many content-based, multimedia retrieval systems are based on a feature-oriented approach to querying, mostly exposing a fixed set of features (introduced at design time) for querying purposes. This restriction to a limited set of features is problematic for two reasons: it restricts the expressiveness at the semantic level, and it seems unfeasible to obtain (a priori) a sufficiently powerful set of features for all possible queries. We describe an alternative approach where users specify precisely the distinguishable characteristics of the desired result set. In this query process, the user first describes a representation of the content (based on a feature or collection of features) and then tells the system how to apply the representation in the search. Our prototype video retrieval system allows the expression of such queries as a sequence of operations, on MPEG video and audio streams, that can be executed on our database system. While the low-level decompression stage is implemented in an imperative programming language, the actual
retrieval approach is expressed in declarative database queries. We assessed this system with a case study in known-item retrieval on broadcast video streams: detecting news bulletins in the stream, with the help of both audio and video information.

2000 Mathematics Subject Classification: 68P20, 68U99
1998 ACM Computing Classification System: Information Search and Retrieval (H.3.3), Multimedia Information Systems (H.5.1)
Keywords and Phrases: multimedia retrieval systems, multimedia database systems, known-item retrieval
Note: Work carried out under the INS1 MIA project and the DRUID project; the report has been submitted as an article to CBMI 2001.

1. Introduction

The most challenging area of multimedia retrieval research is closing the semantic gap: combining and mapping low-level, media-specific features in an intelligent way to high-level concepts. Many multimedia retrieval systems focus on a feature-oriented approach to querying. A single representation of the content of multimedia objects is decided upon a priori. Because an optimal representation for all queries does not seem to exist, researchers attempted to use a series of representations (termed a ‘society of models’ in [1]), and select a good representation based on user feedback, see e.g. [1, 2]. Choosing the right representation automatically is very important for naive users with generic queries.
But guessing the right query representation from user feedback is a very difficult problem, and has not been solved to a satisfactory level. The question arises whether the user can play a larger role in the ‘articulation’ of the query than simply saying yes or no to retrieved objects.

This paper describes an approach in which users define their information needs precisely in terms of the distinguishing characteristics of desired responses. The query does not necessarily depend on a single representation of content: it can be constructed from several representations, possibly containing information from different media types. The analogy in image retrieval is our work on the ‘Image-Spotter’ [3], allowing its users to designate the regions of interest in an image.

Consider for example a user searching for video segments with a space shuttle in orbit over the Earth. Looking at the characteristics of such segments, the user may come up with the following distinguishing characteristics:

– the narration track (if there is any present) is likely to contain words such as ‘space shuttle’, ‘orbit’, ‘earth’;
– there will probably be a large amount of dark colors present in such a segment;
– the space shuttle is mostly white;
– Earth itself is characterized as a collection of blue, green, brown and white colors;
– there is a certain spatial relation between the space shuttle and the Earth.

Figure 1: Example Space Shuttle Search Image

The user specifies the representations for (several of) the characteristics mentioned above, and instructs the system on how to use these representations to answer the query. Some of these characteristics may be complex and difficult to specify by hand, such as the shape of the space shuttle or the variety of colors present in the Earth. Then, an example image such as the one shown in Figure 1 serves as a reference for the system, to determine these characteristics automatically.

We present a work in progress to support the query process sketched in
the (hypothetical) example above. We designed and implemented a prototype video retrieval system, and tested its merits with a case study in known-item retrieval. This case study comprised finding the delimiting segments of Dutch news bulletins, from broadcast video streams containing commercials before the start and after the ending of the news bulletins.

The structure of the rest of the paper is as follows. Section 2 discusses relevant work in the field of multimedia retrieval systems, followed by Section 3 containing our approach. Sections 4 and 5 describe our prototype system architecture and the case study respectively. Sections 6 and 7 present our experimental results, conclusions and directions for future research.

2. Multimedia Retrieval Systems

For the answering of queries, many multimedia retrieval systems focus on content-based retrieval: the analysis and extraction of low-level, media-specific features, followed by a similarity search in the feature spaces available. We have looked at two specific cases of multimedia retrieval systems: video and audio retrieval systems.

The first video retrieval systems considered video retrieval as a special case of image retrieval. An example of such a system is QBIC [4]. In QBIC, video data is first segmented at the shot level, after which key frames are chosen or generated (mosaic frames in the case of shots with panning motion). Object detection was achieved through analyzing the layered representation of video: the detection of regions with coherent motion. VideoQ [5] introduced the concept of video objects. Video objects are sets of regions which display some amount of consistency, under certain criteria, across several frames. As in QBIC, video data is first segmented at the shot level, after which the global background motion of each shot is determined. Color, edge, and motion information of regions within the shot is then extracted to track possible video objects across several frames.

Query interfaces for multimedia retrieval can be distinguished into textual
or visual query interfaces. Visual query interfaces mostly comprise query by feature or feature combinations (local or global features) and query by sketch or example. The VideoQ system additionally allows the sketch of a motion trajectory for a query object, to capture the motion aspect present in video data, and a time duration for the searched video segment (either an absolute duration or an intuitive measure such as 'long', 'medium' or 'short').

Figure 2: System Architecture

An interesting approach to querying video material, which comes closest to what we want to achieve, is described in [6] and is built on top of the VideoQ system. Querying is performed with semantic visual templates, i.e. a set of icons or example objects that form a representation of the semantics the user is searching for. An icon is an animated sketch, composed of graphical objects which resemble the real-world objects. For searching the system, features are extracted from the icons or example objects and a similarity search is performed.

A similar content-based retrieval approach has been followed in audio retrieval research [7]. Muscle Fish [8] extracts features such as loudness, pitch and brightness. In querying the audio database, the user can specify an example sound from which features are extracted and used during query processing. SpeechSkimmer [9] is a system developed for interactively browsing or skimming speech. The system uses time-compression processing to enable users to listen to speech at several times the speed of real-time, based on the notion that human speech can be understood much more quickly than the normal speech rate.
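As a rough illustration of the time-compression idea behind such skimming systems, the following sketch shortens a signal by naive decimation. This is entirely our own stand-in, not SpeechSkimmer's method: the system itself relies on pitch-preserving techniques, whereas plain sample-dropping raises the pitch along with the speed.

```python
def naive_time_compress(samples, factor):
    """Shorten a mono signal by keeping every `factor`-th sample.

    Deliberately crude: unlike the pitch-preserving time-compression
    a system like SpeechSkimmer uses, decimation raises the pitch
    together with the playback speed.
    """
    out = []
    i = 0.0
    while i < len(samples):
        out.append(samples[int(i)])
        i += factor
    return out

second_of_audio = [0.0] * 8000          # 1 s of silence at 8 kHz
faster = naive_time_compress(second_of_audio, 2.0)
print(len(faster))                      # 4000 samples -> plays back in 0.5 s
```

Fractional factors work as well, which is why the index is tracked as a float rather than a stride.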
Speech sounds can be segmented by using information about the speaker's pitch, or by the use of speaker identification. An important feature of audio retrieval systems is query by example, where an example sound segment is provided to the system for querying. Instead of using example sound segments, the system developed at the University of Waikato [10] uses acoustic input for querying: the user can hum a tune. Audio files present in the system are then analyzed for tunes similar to the hummed tune, using pitch extraction or string-matching algorithms (in the case of the Waikato system).

3. Our Approach

A problem is that in many content-based retrieval systems, a standard set of features is present (introduced at design time) for the similarity searches. We deem it impossible to determine a priori a standard set of features which can be used for answering all possible queries. Moreover, one can only describe as many objects, concepts or relations as the available feature spaces (and their combinations) allow to express. Most importantly, it is impossible to determine a priori which feature space best captures higher-level concepts for all possible domains; a standard feature set may not be suitable to exploit available domain knowledge.
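To make this contrast concrete, the following sketch (entirely ours, not part of the prototype) shows a retrieval routine whose feature extractor and similarity measure are supplied at query time instead of being fixed at design time; the domain, item collection, and all function names are hypothetical.

```python
def retrieve(items, extract, similarity, query, k=3):
    """Rank stored items against a query using a caller-supplied feature
    extractor and similarity measure, both chosen at query time rather
    than frozen into the system at design time."""
    scored = [(similarity(extract(x), extract(query)), x) for x in items]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [x for _, x in scored[:k]]

# Hypothetical domain: strings, with a domain-aware "feature" (length)
# that a fixed design-time feature set might never have included.
items = ["news", "commercial break", "weather forecast"]
best = retrieve(items, extract=len,
                similarity=lambda a, b: -abs(a - b),
                query="sports", k=1)
print(best)   # the stored item whose length is closest to the query's
```

Swapping in a different `extract` or `similarity` changes the retrieval model without touching the framework, which is the flexibility argued for above.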
Since domain-specific search strategies generally result in better answers than generic approaches, this is an important issue. Based on the above, we also conjecture that, in the case of a layered or integrated approach to multimedia retrieval (examples include [11, 12, 13]), the restrictiveness in the feature spaces present causes restrictiveness at the higher levels. In our eyes, users can contribute to crossing the semantic gap with a top-down oriented approach, by taking advantage of the domain knowledge of the (expert) users themselves. A multimedia retrieval system should therefore offer users the possibility to define their information need in terms of distinguishable characteristics of the desired responses.

Note that we do not completely dismiss content-based retrieval based on media features. At some point during the query process, media features and the accompanying similarity search will be necessary. We only question the manner in which features are used for retrieval in many systems today.

To test and illustrate our approach, we built a prototype video retrieval system, of which the architecture is shown in Figure 2. The lowest level of the system consists of the raw bit streams (such as video streams, audio streams and other media) that are stored in a multimedia database system. The information that is effectively stored in the database system is the information needed to both decompress and decode the audio and video data of the MPEG bit stream. The next abstraction level consists of media-specific analysis or feature extraction elements (such as frames from a video stream or sample data from an audio stream). We propose that the elements in this second level of abstraction, the frames and sample data themselves, can be regarded as a view on the low-level bit stream.
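As a minimal sketch of this "view" idea (ours alone; the actual system uses Monet tables and MIL, described in Section 4), decoded frames can be modeled as a lazily evaluated mapping over the stored compressed stream, so that decompression happens only for the frames a query actually touches.

```python
class LazyFrameView:
    """A 'view' that decodes frames from a stored bit stream on demand,
    caching results, instead of materializing the decompressed stream."""

    def __init__(self, compressed_frames, decode):
        self._compressed = compressed_frames   # what the database stores
        self._decode = decode                  # decoder chosen at query time
        self._cache = {}

    def __getitem__(self, idx):
        if idx not in self._cache:
            self._cache[idx] = self._decode(self._compressed[idx])
        return self._cache[idx]

# Hypothetical stand-ins: the 'bit stream' is a list of byte strings and
# the 'decoder' merely reports their lengths; a real system would run
# MPEG decompression and decoding here.
stream = [b"\x00\x01", b"\x02\x03\x04", b"\x05"]
view = LazyFrameView(stream, decode=len)
print(view[1])   # decodes only frame 1
```

The same pattern motivates the storage argument made in Section 4: only the compressed stream and decoding metadata need to persist.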
The video frames and sample data form the basis for feature extraction and analysis, which in our system is written in a higher-level database interface language and consists of sequences of operations on database tables. This is what allows the exact feature extraction algorithms to be chosen at query time, thus allowing the ones most appropriate to the domain to be used.

The system as described here is not a complete video retrieval system. It should be considered a framework consisting of basic operations, which requires an actual retrieval model to be built by means of a query definition for each separate retrieval problem. The abstract interface that results from the use of a database system allows such systems to be built relatively easily and quickly. Nevertheless, it does place a larger burden on the users. In order to reduce this burden we plan to construct an intermediate layer between the system and the user. This layer will act like a domain-independent thesaurus. This thesaurus component can offer high-level primitives (such as 'list all speech segments in a certain stream', together with a description of the appropriate media representations needed for answering this sub-query). The prototype system as it stands now is aimed at expert users with knowledge of both video and audio processing. Our intent is to extend the system further to make it usable by ordinary users (see Section 7).

4. System Architecture

For storage of the information needed for both decompression and decoding of the video and audio streams (see Figure 2), we used the Monet database system [14]. The main advantage of using database systems for information retrieval tasks is that they "offer flexible declarative specification means and provide efficiency in execution, through algebraic optimization in the mapping process from specification to query execution plan" [15].

4.1 The Monet Database System

The Monet database system is a main-memory, parallel database system developed at CWI [14]. Its
data model is based on Binary Association Tables (BATs), which are two-column tables consisting of a head and a tail value. Monet offers a flexible algebra to manipulate the BATs present.

The design goals of Monet included high performance and extensibility. High performance is gained through the execution of operations in main memory, the exploitation of inter-operation parallelism, the frequent use of bulk operations (which optimizes cache usage) and a simple data model. Extensibility is offered through modules, in which application programmers can define new data types (called atoms), commands, operators, and procedures. Procedures are written in MIL (Monet Interface Language), a higher-level language defined for user-level interaction with the database. At this point, MIL plays the role of a higher-level specification language. We plan to introduce a layer on top of MIL with a query language more suited for multimodal access, which will then be used as an intermediate layer for communication between user interface components and the retrieval system as a whole.

4.2 Multimedia Extensions

For our experiments an MPEG video and an MPEG audio module were implemented in C. Decoding of the video and audio streams was implemented as commands, whereas much of the feature extraction and analysis operations were written in MIL.

The video module consists of a set of database commands that allow both the decompression and, if needed, decoding of a given range of frames from an MPEG file. This is needed since we want users to have access to all data in the video stream. We consider storing complete decompressed video streams unfeasible because of the enormous amount of data present in such a stream. A rough calculation indicated that 10 hours of video (approximately 5 GB of MPEG-1 data) requires 500 GB for storage of the decompressed video, audio and basic indexing information. The database commands allow users to access both compressed-domain[1] and image-domain data to search in the video stream. Analogous to
the video module, the audio module primarily consists of a set of database commands that allow both the decompression and decoding of a given range of audio frames from the audio stream in an MPEG file. Even though the size of the audio data is only a fraction of the video data present in a stream, many of the same database-size restrictions hold, and the storage of all decompressed data is unfeasible. In addition to these decompression and decoding functions, the audio module contains a set of standard signal processing functions that users can use to extract interesting features from the audio data.

The nature of the system encourages (expert) users to define custom algorithms, but we assume certain operations to be of such importance that optimized implementations are provided in the form of database operations. Examples of such optimized operations are color space conversion and histogram construction, which are relatively low-level and computationally expensive operations.

5. Case Study

In the case study used to illustrate our approach, we focused on a specific known-item retrieval problem in broadcast news, namely to find the starting and ending time indices of news broadcasts. This specific case study was chosen to complement an automated news recording system that is in use for the construction of a data warehouse of news broadcasts [16]. Until now, the determination of the exact beginning and end of each recorded news broadcast had to be done by hand.

5.1 Characteristics of News Broadcasts

News broadcasts are relatively structured and are often characterized by the following properties:

Specific "delimiter" parts. Broadcast news programs often start with a specific frame or set of frames with a constant layout, see Figure 3. In our case the first frame of the opening sequence shows the NOS[2] logo on a black background. The last frame of the end sequence shows a globe surrounded by some graphics and a caption that announces the NOS website [17]. A similar argument can be made for the audio part
of the broadcast. Recognizable tunes are played during the start and end sequences. Additionally, the start and end sequences are accompanied by short periods of silence.

Speech-richness of broadcast news. News broadcasts often contain mostly speech, whereas the surrounding commercials are characterized by a more rapid mix of both speech and music. Also, commercial segments appear louder to the human ear in comparison to speech signals.

High cut frequency of commercial material. Commercial material is often allocated a certain slot of time during which a certain amount of commercials is shown. The average duration of a commercial is 30 seconds, in which all product-related information must be placed. Hence, we can expect a higher level of action (or cut frequency) during commercials.

Note that these characteristics can be collected by watching a collection of news broadcasts.

5.2 Audio and Video Feature Extraction

Given the characteristics of our domain as described in Section 5.1, we chose a number of suitable feature extraction and matching algorithms. Note that we chose these specific features because we found them applicable to this specific problem; the system in no way prescribes or limits us to the use of the features presented here. In choosing suitable feature extraction and matching algorithms, we focused on the specific delimiter parts of Dutch news broadcasts. Also note that two media types (video and audio) are used in analyzing the streams. The main idea is that, given the problem, simple, media-specific features are sufficient for solving the problem when combined during retrieval.

[1] What is usually referred to as 'compressed domain' data is in reality already decompressed. MPEG video data is conceptually decoded in two stages. First, Huffman decompression of the bit stream results in motion vectors and DCT-domain pixel descriptions; this is the data usually referred to as compressed-domain. Second, this data can be used to construct the actual images contained in the
video stream.

[2] NOS stands for "Nederlandse Omroep Stichting", the Dutch broadcast organization.

Figure 3: The begin* and end frames of a news broadcast. (*The box depicted in the begin image indicates the bounding box used in the image matching step and is not part of the actual image.)

The first step of the algorithm is the analysis of the audio stream, and specifically of the occurrence of high-energy parts in the signal. The delimiter parts of news broadcasts are accompanied by specific tunes; both these tunes show up in the analysis as periods of high energy. The time index generated during analysis of the audio stream is used by the video module to prune the video stream search space. The frame segments surrounding the high-energy periods are decompressed, decoded and analyzed for the occurrence of the specific delimiter frames.

Audio Features. Feature extraction of the audio signal uses compressed MPEG audio data [18]. Using compressed MPEG audio data for audio analysis has proven to be an efficient method giving good results compared with analysis on time-domain waveform data [19, 20]. MPEG audio compression is based on perceptual audio coding, which uses the characteristics of the human auditory system in order to gain higher compression ratios. In order to analyze the audio signal, the approach in [20] was followed, where a mean sub-band vector is calculated for each frame. A frame is a collection of sub-band vectors, each specifying the spectral content of 32 raw input samples. In Layer II MPEG audio a frame consists of 3 sub-frames, each containing 12 groups of 32-element sub-band sample vectors. A mean sub-band vector can then be calculated as follows:

    M[i] = ( Σ_{t=1}^{3*12} S_t[i] ) / (3 * 12),   i = 1..32

The collection of mean sub-band vectors is then further analyzed on energy content, which indicates the loudness of the frame and is used in detecting periods with high energy in the audio signal. Note that the concept of high energy can indicate a variety of possibilities. A high
energy segment can include music, loud speech, or loud background sounds. So care must be taken not to assume too much on the basis of high-energy information; also, analysis windows must be chosen with care to ensure the best possible classification. Analogous to the high-energy variant, periods with low energy in the audio stream can be detected. Low-energy periods can indicate silence segments.

    # fname contains the filename of the MPEG stream to be examined
    var decodedStream := decode_audio_stream(fname);
    var rmsValues := [rms](mean_vectors(decodedStream));
    var intervals := high_energy_thresh(high_energy(rmsValues), int(10), dbl(0.85));
    var shots := bat(int, int);
    intervals@batloop { shots.insert(getShotsForTimeSlice(fname, $h, $t)); }
    var beginImages := shots.getKeyFrames_First(fname).rankImages("beginImage_2001");
    var endImages := shots.getKeyFrames_Last(fname).rankImages("endImage_2001");
    var begin := beginImages.reverse.fetch(0);
    var end := endImages.reverse.fetch(0);
    printf("The news starts at frame %d and ends at frame %d\n", begin, end);

Figure 4: (a) Query graph for start and end detection. (b) A snippet of the corresponding MIL query.

Video Features. The video data consists of MPEG-1 video streams [21]. The smallest unit in a video stream is a frame; in other words, a video stream is a sequence of images. In order to provide some structure in a large video stream it is useful to segment the stream into meaningful sections. There are two levels of video segmentation. The first level is the shot level; a shot essentially constitutes a piece of video shot by a camera in one consecutive go. A higher level of grouping could be managed in the form of scenes; scenes are groups of shots constituting a semantic unit, for example a distinct part of a story.

A simple algorithm for finding the specific delimiter parts (see Subsection 5.1) in the video stream is as follows. First the video stream is segmented at shot level. The collection of detected shots is then analyzed further to give us key frames. The key frames are
then given to an image comparison operation, where each key frame is compared with a given image to find the one that matches best.

Indexing the videos at shot level is good enough for rough video segmentation. Shot-level segmentation can be achieved in a number of ways, amongst which is a pair-wise image comparison of consecutive frames based on color histograms [22]. Given the fact that in our problem domain we are only looking for hard cuts (the delimiter frames are sufficiently different from the surrounding frames), the computationally relatively inexpensive approach of comparing image color histograms suffices. Histogram-based cut detection performs equally well compared to other, more advanced cut-detection algorithms when hard cuts are the subject of detection [23]. Although certain color spaces perform better than others, empirical examination of the problem domain has shown that using only the luminance component of the images suffices to detect the vast majority of cuts in the data [24]. More importantly, this relatively simple approach has shown a near-zero miss rate on our data set. Of course, there is a trade-off against a higher false-hit rate.

The second step comprises key frame selection. Once again, for this problem the simplest approach is best: since we are either looking for the beginning of the news broadcast (which always starts at the beginning of a shot) or the end of the news broadcast (which ends at the end of a shot), simply selecting either the first or the last frame of a shot as the key frame is exactly what we want.

The third step is image comparison. Since we know exactly what we are looking for, and the start and end sequence is always the same computer-generated animation, a basic pixel-wise image comparison function is adequate. The exact algorithm measures the difference between corresponding pixels at the macro-block level. For the ranking of the possible start frames, only the macro blocks contained in the bounding box depicted in Figure 3 are used, essentially realizing a
spatial constraint. For the ranking of the possible end frames the complete image is used. The difference between macro blocks is calculated as follows:

    MB_diff(a, b) = Σ_{i=1}^{256} min( (a_i − b_i)^2 / 255, 100 )

The differences are combined in a vector, one dimension per macro block. The length of this vector is used to express the difference between two images.

6. Results

The test set used for the experiments was composed of several news broadcasts recorded at different times during the day over a period of a few days, stored in MPEG-1 format. The video content was encoded in standard PAL format, 25 frames of 352x288 pixels per second. The audio content was encoded with MPEG Layer II compression at 44.1 kHz, 16-bit sampling.

The first algorithm we used was segmentation of the video data at shot level, followed by selection of key frames and a full-frame comparison of the key frames and example images. Although this approach has high precision, it is also computationally very expensive. We attempted to realize a speed-up in searching the video stream by using the low-energy information from the audio stream, assuming that we would be able to find the silences that are present at the beginning and end of, respectively, the start and end sequences of the news broadcast. The introduction of the energy content analysis of the audio stream cut the execution time by roughly a factor of 2. Although we were able to identify the silences we are interested in, we noticed that a large number of short silences is present in the test set. Thus, the percentage of video material that has to be examined even after only