Known-item retrieval on broadcast TV
J.A. List, A.R. van Ballegooij, A.P. de Vries
Information Systems (INS)
INS-R0104, April 30, 2001

Report INS-R0104
ISSN 1386-3681
CWI, P.O. Box 94079, 1090 GB Amsterdam, The Netherlands

CWI is the National Research Institute for Mathematics and Computer Science. CWI is part of the Stichting Mathematisch Centrum (SMC), the Dutch foundation for promotion of mathematics and computer science and their applications. SMC is sponsored by the Netherlands Organization for Scientific Research (NWO). CWI is a member of ERCIM, the European Research Consortium for Informatics and Mathematics.

Copyright © Stichting Mathematisch Centrum
P.O. Box 94079, 1090 GB Amsterdam (NL), Kruislaan 413, 1098 SJ Amsterdam (NL)
Telephone +31 20 592 9333, Telefax +31 20 592 4199

Known-Item Retrieval on Broadcast TV
Johan List, Alex van Ballegooij, Arjen de Vries
{j.a.list, alex.van.ballegooij, arjen.de.vries}@cwi.nl
CWI, P.O. Box 94079, 1090 GB Amsterdam, The Netherlands

ABSTRACT
Many content-based multimedia retrieval systems are based on a feature-oriented approach to querying, mostly exposing a fixed set of features (introduced at design time) for querying purposes. This restriction to a limited set of features is problematic for two reasons: it restricts the expressiveness at the semantic level, and it seems unfeasible to obtain (a-priori) a sufficiently powerful set of features for all possible queries. We describe an alternative approach where users specify precisely the distinguishable characteristics of the desired result set. In this query process, the user first describes a representation of the content (based on a feature or collection of features) and then tells the system how to apply the representation in the search. Our prototype video retrieval system allows the expression of such queries as a sequence of operations on MPEG video and audio streams that can be executed on our database system. While the low-level decompression stage is implemented in an imperative programming language, the actual retrieval approach is expressed in declarative database queries. We assessed this system with a case study in known-item retrieval on broadcast video streams: detecting news bulletins in the stream, with the help of both audio and video information.

2000 Mathematics Subject Classification: 68P20, 68U99
1998 ACM Computing Classification System: Information Search and Retrieval (H.3.3), Multimedia Information Systems (H.5.1)
Keywords and Phrases: multimedia retrieval systems, multimedia database systems, known-item retrieval
Note: Work carried out under the INS1 MIA project and the DRUID project; this report has been submitted as an article to CBMI 2001.

1. INTRODUCTION
The most challenging area of multimedia retrieval research is closing the semantic gap: combining and mapping low-level, media-specific features in an intelligent way to high-level concepts. Many multimedia retrieval systems focus on a feature-oriented approach to querying. A single representation of the content of multimedia objects is decided upon a-priori. Because an optimal representation for all queries does not seem to exist, researchers attempted to use a series of representations (termed a 'society of models' in [1]), and select a good representation based on user feedback, see e.g. [1, 2]. Choosing the right representation automatically is very important for naive users with generic queries.
But guessing the right query representation from user feedback is a very difficult problem, and has not been solved to a satisfactory level. The question arises whether the user can play a larger role in the 'articulation' of the query than simply saying yes or no to retrieved objects.

This paper describes an approach in which users define their information needs precisely in terms of the distinguishing characteristics of desired responses. The query does not necessarily depend on a single representation of content: it can be constructed from several representations, possibly containing information from different media types. The analogy in image retrieval is our work on the 'Image-Spotter' [3], allowing its users to designate the regions of interest in an image.

Consider for example a user searching for video segments with a space shuttle in orbit over the Earth. Looking at the characteristics of such segments, the user may come up with the following distinguishing characteristics:

– the narration track (if there is any present) is likely to contain words such as 'space shuttle', 'orbit', 'earth';
– there will probably be a large amount of dark colors present in such a segment;
– the space shuttle is mostly white;
– Earth itself is characterized as a collection of blue, green, brown and white colors;
– there is a certain spatial relation between the space shuttle and the Earth.

[Figure 1: Example Space Shuttle Search Image]

The user specifies the representations for (several of) the characteristics mentioned above, and instructs the system on how to use these representations to answer the query. Some of these characteristics may be complex and difficult to specify by hand, such as the shape of the space shuttle or the variety of colors present in the Earth. Then, an example image such as the one shown in Figure 1 serves as a reference for the system, to determine these characteristics automatically.

We present a work in progress to support the query process sketched in the (hypothetical) example above.
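As a loose illustration of this style of querying (and not the interface of our prototype), the sketch below combines several user-chosen characteristics into a single score per candidate segment; the segment fields, helper functions and thresholds are all hypothetical.

```python
# Illustrative only: a query assembled from user-chosen distinguishing characteristics.
SPACE_WORDS = {"space shuttle", "orbit", "earth"}

def narration_mentions_space(segment):
    # Characteristic 1: the narration track mentions space-related words.
    text = segment["transcript"].lower()
    return any(word in text for word in SPACE_WORDS)

def mostly_dark(segment, threshold=0.5):
    # Characteristic 2: a large amount of dark colors in the segment.
    return segment["dark_pixel_fraction"] > threshold

def white_region_near_earth_colors(segment):
    # Characteristic 3: a white region (shuttle) in a spatial relation to earth-like colors.
    return any(r["color"] == "white" and r["relation"] == "above-earth-like"
               for r in segment["regions"])

def score(segment):
    # The user, not the system designer, decides how the characteristics are combined;
    # here they are simply averaged.
    checks = [narration_mentions_space(segment),
              mostly_dark(segment),
              white_region_near_earth_colors(segment)]
    return sum(checks) / len(checks)

segments = [
    {"transcript": "The space shuttle reached orbit over Earth.",
     "dark_pixel_fraction": 0.7,
     "regions": [{"color": "white", "relation": "above-earth-like"}]},
    {"transcript": "Local election results tonight.",
     "dark_pixel_fraction": 0.1,
     "regions": []},
]
print(sorted((score(s) for s in segments), reverse=True))  # [1.0, 0.0]
```

The point is only that the user chooses which characteristics matter and how they are weighted; the representations behind each check can still be computed automatically from an example image.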
We designed and implemented a prototype video retrieval system, and tested its merits with a case study in known-item retrieval. This case study comprised finding the delimiting segments of Dutch news bulletins from broadcast video streams containing commercials before the start and after the end of the news bulletins.

The structure of the rest of the paper is as follows. Section 2 discusses relevant work in the field of multimedia retrieval systems, followed by Section 3 containing our approach. Sections 4 and 5 describe our prototype system architecture and the case study respectively. Sections 6 and 7 present our experimental results, conclusions and directions for future research.

2. MULTIMEDIA RETRIEVAL SYSTEMS
To answer queries, many multimedia retrieval systems focus on content-based retrieval: the analysis and extraction of low-level, media-specific features, followed by a similarity search in the feature spaces available. We have looked at two specific cases of multimedia retrieval systems: video and audio retrieval systems.

The first video retrieval systems considered video retrieval as a special case of image retrieval. An example of such a system is QBIC [4]. In QBIC, video data is first segmented at the shot level, after which key frames are chosen or generated (mosaic frames in the case of shots with panning motion). Object detection was achieved through analyzing the layered representation of video: the detection of regions with coherent motion. VideoQ [5] introduced the concept of video objects. Video objects are sets of regions which display some amount of consistency, under certain criteria, across several frames. As in QBIC, video data is first segmented at the shot level, after which the global background motion of each shot is determined. Color, edge and motion information of regions within the shot is then extracted to track possible video objects across several frames.

Query interfaces for multimedia retrieval can be divided into textual and visual query interfaces. Visual query interfaces mostly comprise query by feature or feature combinations (local or global features) and query by sketch or example. The VideoQ system also allows the user to sketch a motion trajectory for a query object, to capture the motion aspect present in video data, and to specify a time duration for the searched video segment (either an absolute duration or an intuitive measure such as 'long', 'medium' or 'short').
SpeechSkimmer[9]is a system developed for interactively browsing or skimming of speech.The system uses time compression processing to enable users to listen to speech several times the speed of real-time, based on the notion that human speech can be understood much quicker than the normal speech rate. Speech sounds can be segmented by using information about the speaker’s pitch or the use of speaker identification.An important feature of audio retrieval systems is the query by example feature,where an example sound segment is provided to the system for querying.Instead of using example sound segments,the system developed at the University of Waikato[10]uses acoustic input for querying:the user can hum a tune.Audiofiles present in the system are then analyzed for tunes similar to the hummed tune,using pitch extraction or string-matching algorithms(in the case of the Waikato system).3.O UR A PPROACHA problem is that in many content-based retrieval systems,a standard set of features is present(introduced at design time)for the similarity searches.We deem it impossible to determine a-priori a standard set of features which can be used for answering all queries possible.Moreover,there are only so many objects, concepts or relations to describe as the used feature space(combinations)allow to express.Most impor-tantly,it is impossible to a-priori determine which feature space best captures higher-level concepts for all possible domains,a standard feature-set may not be suitable to exploit available domain knowledge. Since domain specific search strategies generally result in better answers than generic approaches,this is an important issue.Based on the above,we also conjure that,in the case of a layered or integrated approach to multimedia retrieval(examples include[11,12,13]),the restrictiveness in feature spaces present causes restrictiveness at the higher levels.In our eyes,users can contribute to crossing the semantic gap with a top-down oriented approach,by taking advantage of the domain knowledge of the(expert)users themselves.A multimedia retrieval system should therefore offer the users the possibility to define their information need in terms of distinguishable characteristics of desired responses.Note that we do not completely dismiss content-based retrieval based on media features.At some point during the query process,media features and the accompanying similarity search will be necessary.We only question the manner in which features are used for retrieval in many systems today.To test and illustrate our approach,we built a prototype video retrieval system,of which the architecture is shown in Figure2.The lowest level of the system consists of the raw bit streams(such as video streams,4.System Architecture4 audio streams and other media)that are stored in a multimedia database system.The information that is effectively stored in the database system is the information needed to both de-compress and decode the audio and video data of the MPEG bit stream.The next abstraction level consists of media-specific analysis or feature extraction elements(such as frames from a video stream or sample data from an audio stream).We propose that the construction of the elements,in this second level of abstraction,the frames and sample data in itself can be regarded as a view on the low-level bit stream. 
The video frames and sample data form the basis for feature extraction and analysis,which in our system is written in a higher-level database interface language and consist of sequences of operations on database tables.This is what allows the exact feature extraction algorithms to be chosen at query-time,thus allowing for the most appropriate ones with respect to the domain to be used.The system as described here is not a complete video retrieval system.It should be considered as a framework consisting of basic operations that requires an actual retrieval model to be built by means of a query-definition for each separate retrieval problem.The abstract interface that is a result of the use of a database system allows such systems to be build relatively easily and quickly.Nevertheless it does place a larger burden on the users.In order to reduce this burden we plan to construct an intermediate layer between the system and the user.This layer will act like a domain-independent thesaurus.This thesaurus component can offer high level primitives(such as’list all speech segments in a certain stream’together with a description of the appropriate media-representations needed for answering this sub-query).The prototype system as it stands now is aimed at expert users with knowledge of both video-and audio processing.Our intent is to extend the system further for making it usable by ordinary users(see Section7).4.S YSTEM A RCHITECTUREFor storage of the information needed for both decompression and decoding of the video-and audio streams (see Figure2),we used the Monet database system[14].The main advantage of using database systems for information retrieval task is that they“offerflexible declarative specification means and provide efficiency in execution,through algebraic optimization in the mapping process from specification to query execution plan”[15].4.1The Monet Database SystemThe Monet database system is a main-memory,parallel database system developed at CWI[14].Its data model is based on Binary Association Tables(BATs),which are two-column tables consisting of a head and tail value.Monet offers aflexible algebra to manipulate the BATs present.The design goals of Monet included high performance and extensibility.High performance is gained through the execution of operations in main memory,the exploitation of inter-operation parallelism,the frequent use of bulk operations which optimizes cache usage and a simple data model.Extensibility is offered through modules,in which application programmers can define new data types(called atoms), commands,operators,and procedures.Procedures are written in MIL(Monet Interface Language)which is a higher-level language defined for user-level interaction with the database.At this point,MIL plays the role of a higher-level specification language.We plan to introduce a layer on top of MIL with a query language more suited for the multimodal access and will then be used as an intermediate layer for communication between user interface components and the retrieval system as a whole.4.2Multimedia ExtensionsFor our experiments an MPEG video and an MPEG audio module were implemented in C.Decoding of the video and audio streams were implemented as commands whereas much of the feature extraction and analysis operations were written in MIL.The video module consists of a set of database commands that allow both the decompression and if needed decoding of a given range of frames from an MPEGfile.This is needed since we want users to have access to all data in the video 
4.2 Multimedia Extensions

For our experiments an MPEG video module and an MPEG audio module were implemented in C. Decoding of the video and audio streams was implemented as commands, whereas much of the feature extraction and analysis was written in MIL. The video module consists of a set of database commands that allow both the decompression and, if needed, the decoding of a given range of frames from an MPEG file. This is needed since we want users to have access to all data in the video stream. We consider storing complete decompressed video streams infeasible because of the enormous amount of data present in such a stream: a rough calculation indicated that 10 hours of video (approximately 5 GB of MPEG-1 data) requires 500 GB for storage of the decompressed video, audio, and basic indexing information (a rough check of this figure is sketched at the end of this subsection). The database commands allow users to access both compressed-domain (1) and image-domain data to search in the video stream.

Analogous to the video module, the audio module primarily consists of a set of database commands that allow both the decompression and decoding of a given range of audio frames from the audio stream in an MPEG file. Even though the size of the audio data is only a fraction of that of the video data present in a stream, many of the same database-size restrictions hold, and the storage of all decompressed data is infeasible. In addition to these decompression and decoding functions the audio module contains a set of standard signal processing functions that users can use to extract interesting features from the audio data.

The nature of the system encourages (expert) users to define custom algorithms, but we assume certain operations to be of such importance that optimized implementations are provided in the form of database operations. Examples of such optimized operations are color space conversion and histogram construction, which are relatively low-level and computationally expensive operations.
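As a back-of-the-envelope check of the storage estimate above, the following sketch recomputes the decompressed size of 10 hours of material. The assumed parameters (3 bytes per decompressed pixel, 44.1 kHz 16-bit stereo audio) are not taken from the system description; the frame size and rate are those of the test material reported in Section 6, and indexing overhead is ignored.

# Rough check of the decompressed-storage estimate; parameter choices are
# assumptions for illustration, not values from the described system.
hours = 10
seconds = hours * 3600

frame_bytes = 352 * 288 * 3                 # one decompressed PAL frame, 3 bytes/pixel
video_bytes = frame_bytes * 25 * seconds    # 25 frames per second

audio_bytes = 44_100 * 2 * 2 * seconds      # samples/s * bytes/sample * channels

total_gb = (video_bytes + audio_bytes) / 1e9
print(f"Decompressed video + audio for {hours} hours: about {total_gb:.0f} GB")

This yields roughly 280 GB for raw video and audio alone, so with indexing information and intermediate representations the 500 GB figure quoted above is plausible.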
5. Case Study

In the case study used to illustrate our approach, we focused on a specific known-item retrieval problem in broadcast news, namely to find the starting and ending time indices of news broadcasts. This specific case study was chosen to complement an automated news recording system that is in use for the construction of a data warehouse of news broadcasts [16]. Until now, the determination of the exact beginning and end of each recorded news broadcast had to be done by hand.

5.1 Characteristics of News Broadcasts

News broadcasts are relatively structured and are often characterized by the following properties:

Specific "delimiter" parts: Broadcast news programs often start with a specific frame or set of frames with a constant layout, see Figure 3. In our case the first frame of the opening sequence shows the NOS (2) logo on a black background. The last frame of the end sequence shows a globe surrounded by some graphics and a caption that announces the NOS website [17]. A similar argument can be made for the audio part of the broadcast: recognizable tunes are played during the start and end sequences. Additionally, the start and end sequences are accompanied by short periods of silence.

Speech-richness of broadcast news: News broadcasts contain mostly speech, whereas the surrounding commercials are characterized by a more rapid mix of both speech and music. Also, commercial segments appear louder to the human ear in comparison to speech signals.

High cut frequency of commercial material: Commercial material is often allocated a certain slot of time during which a certain number of commercials is shown. The average duration of a commercial is 30 seconds, in which all product-related information must be placed. Hence, we can expect a higher level of action (or cut frequency) during commercials.

Note that these characteristics can be collected by watching a collection of news broadcasts.

5.2 Audio and Video Feature Extraction

Given the characteristics of our domain as described in Section 5.1, we chose a number of suitable feature extraction and matching algorithms. Note that we chose these specific features because we found them to be applicable to this specific problem; the system in no way prescribes or limits the use of the features presented here. In choosing suitable feature extraction and matching algorithms, we focused on the specific delimiter parts of Dutch news broadcasts. Also note that two media types (video and audio) are used in analyzing the streams. The main idea is that, given the problem, simple, media-specific features are sufficient for solving the problem when combined during retrieval.

(1) What is usually referred to as 'compressed domain' data is in reality already decompressed. MPEG video data is conceptually decoded in two stages. First, Huffman decompression of the bit stream results in motion vectors and DCT-domain pixel descriptions; this is the data usually referred to as compressed-domain. Second, this data can be used to construct the actual images contained in the video stream.

(2) NOS stands for "Nederlandse Omroep Stichting", the Dutch broadcast organization.

Figure 3: The begin* and end frames of a news broadcast. (*The box depicted in the begin image indicates the bounding box used in the image matching step and is not part of the actual image.)

The first step of the algorithm is the analysis of the audio stream, and specifically of the occurrence of high-energy parts in the signal. The delimiter parts of news broadcasts are accompanied by specific tunes, and both of these tunes show up in the analysis as periods of high energy. The time index generated during analysis of the audio stream is used by the video module to prune the video stream search space: the frame segments surrounding the high-energy periods are decompressed, decoded, and analyzed for the occurrence of the specific delimiter frames.

Audio Features: Feature extraction of the audio signal uses compressed MPEG audio data [18]. Using compressed MPEG audio data for audio analysis has proven to be an efficient method giving good results compared with analysis on time-domain waveform data [19, 20]. MPEG audio compression is based on perceptual audio coding, which uses the characteristics of the human auditory system in order to obtain higher compression ratios. In order to analyze the audio signal, the approach of [20] was followed, where a mean sub band vector is calculated for each frame. A frame is a collection of sub band vectors, each specifying the spectral content of 32 raw input samples. In Layer II MPEG audio a frame consists of 3 sub frames, each containing 12 groups of 32-element sub band sample vectors. A mean sub band vector can then be calculated as follows:

M[i] = \frac{\sum_{t=1}^{3 \cdot 12} S_t[i]}{3 \cdot 12}, \quad i = 1, \ldots, 32
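As an illustration of this computation, the following sketch computes the mean sub band vector of one Layer II frame and a root-mean-square energy value from it. It is written in Python with NumPy purely for illustration; the array layout is an assumption based on the description above, and this is not the MIL/C implementation used in the system.

import numpy as np

def mean_subband_vector(subband_samples):
    # subband_samples: (3 * 12, 32) array holding the sub band sample vectors
    # of one MPEG Layer II audio frame
    return subband_samples.sum(axis=0) / (3 * 12)

def frame_energy(mean_vector):
    # RMS over the 32 mean sub band values, used as a loudness indicator
    return float(np.sqrt(np.mean(mean_vector ** 2)))

frame = np.random.rand(36, 32)   # stand-in for one decoded audio frame
print(frame_energy(mean_subband_vector(frame)))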
The collection of mean sub band vectors is then further analyzed on energy content, which indicates the loudness of the frame and is used to detect periods with high energy in the audio signal. Note that the concept of high energy can cover a variety of possibilities: a high-energy segment can contain music, loud speech, or loud background sounds. So care must be taken not to assume too much on the basis of high-energy information alone; also, analysis windows must be chosen with care to ensure the best possible classification. Analogous to the high-energy variant, periods with low energy in the audio stream can be detected. Low-energy periods can indicate silence segments. The MIL query combining the audio and video analysis steps of this case study is shown in Figure 4.

# fname contains the filename of the MPEG stream to be examined
var decodedStream := decode_audio_stream(fname);
var rmsValues := [rms](mean_vectors(decodedStream));
var intervals := high_energy_thresh(high_energy(rmsValues), int(10), dbl(0.85));
var shots := bat(int, int);
intervals@batloop { shots.insert(getShotsForTimeSlice(fname, $h, $t)); }
var beginImages := shots.getKeyFrames_First(fname).rankImages("beginImage_2001");
var endImages := shots.getKeyFrames_Last(fname).rankImages("endImage_2001");
var begin := beginImages.reverse.fetch(0);
var end := endImages.reverse.fetch(0);
printf("The news starts at frame %d and ends at frame %d\n", begin, end);

Figure 4: (a) Query graph for start and end detection. (b) A snippet of the corresponding MIL query.

Video Features: The video data consists of MPEG-1 video streams [21]. The smallest unit in a video stream is a frame; in other words, a video stream is a sequence of images. In order to provide some structure in a large video stream it is useful to segment the stream into meaningful sections. There are two levels of video segmentation. The first level is the shot level: a shot essentially constitutes a piece of video recorded by a camera in one consecutive go. A higher level of grouping can be managed in the form of scenes: groups of shots constituting a semantic unit, for example a distinct part of a story.

A simple algorithm for finding the specific delimiter parts (see Subsection 5.1) in the video stream is as follows. First the video stream is segmented at shot level. The collection of detected shots is then analyzed further to give us key frames. The key frames are then given to an image comparison operation, where each key frame is compared with a given image to find the one that matches best.

Indexing the videos at shot level is good enough for rough video segmentation. Shot-level segmentation can be achieved in a number of ways, amongst which a pair-wise comparison of consecutive frames based on color histograms [22]. Given that in our problem domain we are only looking for hard cuts (the delimiter frames are sufficiently different from the surrounding frames), the computationally relatively inexpensive approach of comparing image color histograms suffices. Histogram-based cut detection performs equally well to other, more advanced cut-detection algorithms when hard cuts are the subject of detection [23]. Although certain color spaces perform better than others, empirical examination of the problem domain has shown that using only the luminance component of the images suffices to detect the vast majority of cuts in the data [24]. More importantly, this relatively simple approach has shown a near-zero miss rate on our data set. Of course, there is a trade-off against a higher false-hit rate.
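As an illustration of such a histogram-based hard-cut detector, the following sketch compares luminance histograms of consecutive frames and reports a cut whenever their difference exceeds a threshold. It is written in Python with NumPy purely for illustration; the bin count and threshold are arbitrary example values, not the settings used in the system described here.

import numpy as np

def luminance_histogram(frame_y, bins=64):
    # normalized luminance histogram of one frame (2-D array of Y values)
    hist, _ = np.histogram(frame_y, bins=bins, range=(0, 255))
    return hist / hist.sum()

def detect_hard_cuts(frames_y, threshold=0.4):
    # flag frame indices whose histogram differs strongly from the previous frame
    cuts, prev = [], None
    for idx, frame_y in enumerate(frames_y):
        hist = luminance_histogram(frame_y)
        if prev is not None and np.abs(hist - prev).sum() > threshold:
            cuts.append(idx)
        prev = hist
    return cuts

# Example on synthetic data: five dark frames followed by five bright ones.
frames = [np.full((288, 352), 20) for _ in range(5)] + \
         [np.full((288, 352), 200) for _ in range(5)]
print(detect_hard_cuts(frames))   # reports a cut at index 5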
The second step comprises key frame selection. Once again, for this problem the simplest approach is best, since we are looking for either the beginning of the news broadcast (which always starts at the beginning of a shot) or the end of the news broadcast (which ends at the end of a shot). Simply selecting either the first or the last frame of a shot as the key frame is exactly what we want.

The third step is image comparison. Since we know exactly what we are looking for, and the start and end sequences are always the same computer-generated animation, a basic pixel-wise image comparison function is adequate. The algorithm measures the difference between corresponding pixels at macro-block level. For the ranking of the possible start frames only the macro blocks contained in the bounding box depicted in Figure 3 are used, essentially realizing a spatial constraint. For the ranking of the possible end frames the complete image is used. The difference between two macro blocks a and b is calculated as follows:

MB_{diff}(a, b) = \sum_{i=1}^{256} \min\left( \frac{(a_i - b_i)^2}{255}, 100 \right)

The differences are combined in a vector, one dimension per macro block. The length of this vector is used to express the difference between two images.

6. Results

The test set used for the experiments was composed of several news broadcasts recorded at different times during the day over a period of a few days, stored in MPEG-1 format. The video content was encoded in standard PAL format, 25 frames of 352x288 pixels per second. The audio content was encoded with MPEG Layer II compression at 44.1 kHz, 16-bit sampling.

The first algorithm we used was segmentation of the video data at shot level, followed by selection of key frames and a full-frame comparison of the key frames and example images. Although this approach has high precision, it is also computationally very expensive. We attempted to realize a speed-up in searching the video stream by using the low-energy information from the audio stream, assuming that we would be able to find the silences that are present at the beginning and end of the start and end sequences of the news broadcast, respectively. The introduction of the energy content analysis of the audio stream cut the execution time by roughly a factor of 2. Although we were able to identify the silences we are interested in, we noticed that a large number of short silences is present in the test set. Thus, the percentage of video material that has to be examined even after only