Chinese-English Parallel Literature Translations
Enterprise Capital Management: Chinese-English Parallel Foreign Literature Translation (the document contains the English original and a Chinese translation)

An Analysis of Working Capital Management Results Across Industries

Abstract
Firms are able to reduce financing costs and/or increase the funds available for expansion by minimizing the amount of funds tied up in current assets. We provide insights into the performance of surveyed firms across key components of working capital management by using CFO magazine's annual Working Capital Management Survey. We discover that significant differences exist between industries in working capital measures across time. In addition, we discover that these measures for working capital change significantly within industries across time.

Introduction
The importance of efficient working capital management is indisputable. Working capital is the difference between resources in cash or readily convertible into cash (current assets) and organizational commitments for which cash will soon be required (current liabilities). The objective of working capital management is to maintain the optimum balance of each of the working capital components. Business viability relies on the ability to effectively manage receivables, inventory, and payables. Firms are able to reduce financing costs and/or increase the funds available for expansion by minimizing the amount of funds tied up in current assets. Much managerial effort is expended in bringing non-optimal levels of current assets and liabilities back toward optimal levels. An optimal level would be one in which a balance is achieved between risk and efficiency.

A recent example of business attempting to maximize working capital management is the recurrent attention being given to the application of Six Sigma® methodology. Six Sigma® methodologies help companies measure and ensure quality in all areas of the enterprise. When used to identify and rectify discrepancies, inefficiencies, and erroneous transactions in the financial supply chain, Six Sigma® reduces Days Sales Outstanding (DSO), accelerates the payment cycle, improves customer satisfaction, and reduces the necessary amount and cost of working capital needs. There appear to be many success stories, including Jennifer Towne's (2002) report of a 15 percent decrease in days that sales are outstanding, resulting in an increased cash flow of approximately $2 million at Thibodaux Regional Medical Center. Furthermore, bad debts declined from $3.4 million to $600,000. However, Waxer's (2003) study of multiple firms employing Six Sigma® finds that it is really a "get rich slow" technique, with a rate of return hovering in the 1.2 to 4.5 percent range.

Even in a business using Six Sigma® methodology, an "optimal" level of working capital management needs to be identified. Industry factors may impact firm credit policy, inventory management, and bill-paying activities. Some firms may be better suited to minimize receivables and inventory, while others maximize payables. Another aspect of "optimal" is the extent to which poor financial results can be tied to sub-optimal performance. Fortunately, these issues are testable with data published by CFO magazine, which claims to be the source of "tools and information for the financial executive," and are the subject of this research.

In addition to providing mean and variance values for the working capital measures and the overall metric, two issues will be addressed in this research. One research question is,
"are firms within a particular industry clustered together at consistent levels of working capital measures? For instance, are firms in one industry able to quickly transfer sales into cash, while firms from another industry tend to have high sales levels for a particular level of inventory?" The other research question is, "does working capital management performance for firms within a given industry change from year to year?"

The following section presents a brief literature review. Next, the research method is described, including some information about the annual Working Capital Management Survey published by CFO magazine. Findings are then presented and conclusions are drawn.

Related Literature
The importance of working capital management is not new to the finance literature. Over twenty years ago, Largay and Stickney (1980) reported that the then-recent bankruptcy of W.T. Grant, a nationwide chain of department stores, should have been anticipated because the corporation had been running a deficit cash flow from operations for eight of the last ten years of its corporate life. As part of a study of the Fortune 500's financial management practices, Gilbert and Reichert (1995) find that accounts receivable management models are used in 59 percent of these firms to improve working capital projects, while inventory management models were used in 60 percent of the companies. More recently, Farragher, Kleiman and Sahu (1999) find that 55 percent of firms in the S&P Industrial index complete some form of a cash flow assessment, but did not present insights regarding accounts receivable and inventory management, or the variation of any current asset or liability accounts across industries. Thus, mixed evidence exists concerning the use of working capital management techniques.

Theoretical determination of optimal trade credit limits is the subject of many articles over the years (e.g., Schwartz 1974; Scherr 1996), with scant attention paid to actual accounts receivable management. Across a limited sample, Weinraub and Visscher (1998) observe a tendency of firms with low levels of current ratios to also have low levels of current liabilities. Simultaneously investigating accounts receivable and payable issues, Hill, Sartoris, and Ferguson (1984) find differences in the way payment dates are defined: payees define the date of payment as the date payment is received, while payors view payment as the postmark date. Additional WCM insight across firms, industries, and time can add to this body of research.

Maness and Zietlow (2002, 51, 496) present two models of value creation that incorporate effective short-term financial management activities. However, these models are generic and do not consider unique firm or industry influences. Maness and Zietlow discuss industry influences in a short paragraph that includes the observation that "an industry a company is located in may have more influence on that company's fortunes than overall GNP" (2002, 507). In fact, a careful review of this 627-page textbook finds only sporadic information on actual firm levels of WCM dimensions, virtually nothing on industry factors except for some boxed items with titles such as "Should a Retailer Offer an In-House Credit Card" (128), and nothing on WCM stability over time.
This research will attempt to fill this void by investigating patterns related to working capital measures within industries and illustrating differences between industries across time.

An extensive survey of library and Internet resources provided very few recent reports about working capital management. The most relevant set of articles was Weisel and Bradley's (2003) article on cash flow management and one on inventory control as a result of effective supply chain management by Hadley (2004).

Research Method: The CFO Rankings
The first annual CFO Working Capital Survey, a joint project with REL Consultancy Group, was published in the June 1997 issue of CFO (Mintz and Lezere 1997). REL is a London, England-based management consulting firm specializing in working capital issues for its global list of clients. The original survey reports several working capital benchmarks for public companies using data for 1996. Each company is ranked against its peers and also against the entire field of 1,000 companies. REL continues to update the original information on an annual basis.

REL uses the "cash flow from operations" value located on firm cash flow statements to estimate cash conversion efficiency (CCE). This value indicates how well a company transforms revenues into cash flow. A "days of working capital" (DWC) value is based on the dollar amount in each of the aggregate, equally weighted receivables, inventory, and payables accounts. The days of working capital (DWC) represents the time period between the purchase of inventory on account from vendors and the sale to the customer, the collection of the receivables, and payment receipt. Thus, it reflects the company's ability to finance its core operations with vendor credit. A detailed investigation of WCM is possible because CFO also provides firm and industry values for days sales outstanding (A/R), inventory turnover, and days payables outstanding (A/P).

Research Findings
Average and Annual Working Capital Management Performance
Working capital management component definitions and average values are reported for the entire 1996-2000 period. Across the nearly 1,000 firms in the survey, cash flow from operations, defined as cash flow from operations divided by sales and referred to as "cash conversion efficiency" (CCE), averages 9.0 percent. Incorporating a 95 percent confidence interval, CCE ranges from 5.6 percent to 12.4 percent. The days working capital (DWC), defined as the sum of receivables and inventories less payables divided by daily sales, averages 51.8 days and is very similar to the days that sales are outstanding (50.6), because the inventory turnover rate (once every 32.0 days) is similar to the number of days that payables are outstanding (32.4 days). In all instances the standard deviation is relatively small, suggesting that these working capital management variables are consistent across CFO reports.

Industry Rankings on Overall Working Capital Management Performance
CFO magazine provides an overall working capital ranking for firms in its survey, using the following equation [equation not reproduced in this excerpt]. Industry-based differences in overall working capital management are presented for the twenty-six industries that had at least eight companies included in the rankings each year. In the typical year during this period, CFO magazine ranks 970 companies. Industries are listed in order of the mean overall CFO ranking of working capital performance.
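To make the two survey measures concrete, the following is a minimal illustrative sketch, not the magazine's actual scoring code, of how the CCE and DWC values defined above could be computed from a firm's financial figures; the function name and field names are assumptions chosen for readability, and the example numbers are invented.

```python
def working_capital_metrics(cfo, sales, receivables, inventory, payables, days=365):
    """Compute the two survey measures described above.

    cfo    -- cash flow from operations
    sales  -- annual sales
    receivables, inventory, payables -- period-end balances
    """
    cce = cfo / sales                                         # cash conversion efficiency
    daily_sales = sales / days
    dwc = (receivables + inventory - payables) / daily_sales  # days of working capital
    return cce, dwc

# Hypothetical firm: $90M operating cash flow on $1B sales,
# $140M receivables, $90M inventory, $90M payables.
cce, dwc = working_capital_metrics(90e6, 1e9, 140e6, 90e6, 90e6)
print(f"CCE = {cce:.1%}, DWC = {dwc:.1f} days")   # CCE = 9.0%, DWC = 51.1 days
```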
Since the best average ranking possible for an eight-company industry is 4.5 (this assumes that the eight companies are ranked one through eight for the entire survey), it is quite obvious that all firms in the petroleum industry must have been receiving very high overall working capital management rankings. In fact, the petroleum industry is ranked first in CCE and third in DWC (as illustrated in Table 5 and discussed later in this paper). Furthermore, the petroleum industry had the lowest standard deviation of working capital rankings and the smallest range of working capital rankings. The only other industry with a mean overall ranking less than 100 was the Electric & Gas Utility industry, which ranked second in CCE and fourth in DWC. The two industries with the worst working capital rankings were Textiles and Apparel. Textiles rank twenty-second in CCE and twenty-sixth in DWC. The apparel industry ranks twenty-third and twenty-fourth in the two working capital measures.

Conclusions
The research presented here is based on the annual ratings of working capital management published in CFO magazine. Our findings indicate a consistency in how industries "stack up" against each other over time with respect to the working capital measures. However, the working capital measures themselves are not static (i.e., averages of working capital measures across all firms change annually); our results indicate significant movements across our entire sample over time. Our findings are important because they provide insight into working capital performance across time and into working capital management across industries. These changes may be explained in part by macroeconomic factors: changes in interest rates, the rate of innovation, and competition are likely to impact working capital management. As interest rates rise, there would be less desire to make payments early, which would stretch accounts payable, accounts receivable, and cash accounts. The ramifications of this study include the finding of distinct levels of WCM measures for different industries, which tend to be stable over time. Many factors help to explain this discovery. The improving economy during the period of the study may have resulted in improved turnover in some industries, while slowing turnover may have been a signal of troubles ahead. Our results should be interpreted cautiously. Our study takes place over a short time frame during a generally improving market. In addition, the survey suffers from survivorship bias: only the top firms within each industry are ranked each year, and the composition of those firms within the industry can change annually.

Further research may take one of two lines. First, there could be a study of whether stock prices respond to CFO magazine's publication of working capital management ratings. Second, there could be a study of which, if any, of the working capital management components relate to share price performance. Given our results, these studies need to take industry membership into consideration when estimating stock price reaction to working capital management performance.

An Analysis of Working Capital Management Results Across Industries
Greg Filbeck, Schweser Study Program; Thomas M. Krueger, University of Wisconsin-La Crosse
Abstract: Firms can reduce financing costs and/or increase the funds available for expansion by minimizing the amount of funds tied up in current assets.
Chinese-English Parallel Foreign Literature Translation (the document contains the English original and a Chinese translation)

Architectural Creation and Comprehensive Technology in the Global Design of Building Structures

The 21st century will be an era in which technologies from many disciplines coexist, and this will form an enormous driving force for the development of architecture. The global (integrated) design of buildings is therefore increasingly important, and the architect must seize the opportunity, give full play to the architect's leading role, and preside well over every building engineering design. The concept of global design is not new to architectural design. Its characteristic is that all elements related to a building, such as the external environmental conditions, the building itself, and its technical equipment, work in coordination in an all-round way and are combined with comprehensive new technology to create a premium building.

To create a premium building, sustainable development, that is, future requirements, must be considered: how to save natural resources as much as possible, how to protect the environment on which mankind depends for existence, and how to achieve high quality in both architectural design and construction, so as to reduce the amount of building equipment used and reduce the overall cost of the project.

Comprehensive new technology means giving full play to the technical specialty of each discipline, creating and using new technologies, and coordinating them in an all-round way with the building's exterior space, dimensions, and components, thereby reducing equipment investment and operating expenses.

Every successful building project condenses collective intelligence and strength: architectural creation embodies the intelligence and expectations contributed by the architect; the engineering design of a building is the joint hard work of the architectural, structural, and equipment specialties; and the completed construction embodies the diligence and sweat expended in design, construction, installation, and management.

In the early 1990s our understanding of the concept of global design was still somewhat elementary, yet we applied it in engineering design, sometimes consciously and sometimes not, and gained some results. This paper focuses on two buildings that won second prizes for scientific and technological progress, the financial comprehensive building of the Industrial and Commercial Bank of Hangzhou and the financial building of the Bank of Communications of Hangzhou, to expound the global design of buildings and the comprehensive technology of architectural creation. To achieve outstanding global design in these two projects, the architect served as creator and collaborator at every design stage and until the buildings were completed.

The two projects have been formally in operation for more than four years. They run well and in good coordination, have achieved the anticipated results, have been accepted and appreciated by the public, and have won various honors, including outstanding design awards, science and technology progress prizes, project quality awards, "local top ten views," and best model image awards, all of which are a source of gratification and pride for the architects and engineers.
Architectural creation
In an era that emphasizes the global design of buildings, the architect's creative ideas and design methods must make certain breakthroughs. Creative inspiration should be built on the analysis, synthesis, and appraisal of the global design of the building; the architect must improve the ability to integrate the building design, possess a new knowledge system and way of thinking, and merge multi-disciplinary technology. We applied this new design thinking in the projects mentioned above and emphasized the globality of architectural creation. The design overviews and creative conceptions of the two construction projects are explained below.

1) The financial comprehensive building of the Industrial and Commercial Bank of Hangzhou is a mixed-use building with a total floor area of 39,000 square meters; the main tower is 84 m high with 22 storeys, the podium has 4 storeys (partly 6), and there are 2 basement levels. The overall design thinking breaks through the traditional design mode of bank buildings in our country, which is closed, solemn, and stern, to create a multi-functional, open style of architecture with first-class functions, integrated with the mode of the international commercial bank.

The massing of the building is free and open, composed of hyperboloid surfaces: the main tower presents a curved surface enclosing toward the south and the podium a curved surface enclosing toward the north, the two together conveying the meaning of "gathering treasure." The building rises vigorously; the elevation combines large areas of solid granite wall, large bands of aluminium-alloy curtain wall, and some glass curtain wall, interweaving the three into a powerful and vigorous whole in which massing and solid wall layers build up successively to form a concise, tall, upward-striving image with a distinct and unique character.

The floors and indoor spaces are designed as multi-functional, open spaces that combine business negotiation, office work, meetings, reception, fitness and leisure, and sightseeing. The spacious, bright two-storey open hall is unified in the pale-yellow tone of Italian marble and is set off by escalators, a fountain, and lighting, making the space magnificent, graceful, and welcoming. The intelligent computer network center, the open intelligent office space, and all related rooms are reasonably distributed on each floor. The top floor is a circular viewing level, reached by lift, from which one can enjoy a panoramic view of the West Lake and fully enjoy the warmth of nature.

2) The financial building of the Bank of Communications of Hangzhou is a purely financial office block with a total floor area of 19,000 square meters; the building is 39.9 meters high, with 13 storeys above ground and 2 basement levels.
Because the surrounding buildings are taller, the designer gave this building a unique architectural appearance. Its elevation is designed in a neoclassical form: the base adopts rough granite to express strength, while the upper part is faced with flamed granite, vertical bars, and banded aluminium windows, representing the noble, refined, and serious character of the bank.

In creating the above two projects, besides carefully shaping the buildings and meticulously blending indoor space with the outside environment, the architect, in order to achieve outstanding global design and according to the characteristics of the region and the projects, put forward the following requirements to every specialty:
(1) control the total height of the building strictly;
(2) create clear storey heights favorable to intelligent, comfortable use;
(3) meet the floor area demanded by the owner;
(4) protect the environment, save energy, and reduce investment;
(5) design meticulously, and use and popularize new technology;
(6) cooperate closely among all specialties and optimize the design.

Comprehensive technology
A building should have strong vitality, room for sustainable development, rich content, and comprehensive new technology. In the above construction works, the following new technologies were popularized and used: building intelligence technology; unbonded prestressed concrete flat-beam and flat-slab structure technology and flat-beam structure technology; baseplate temperature reinforcement technology; new technology for enclosing and protecting the basement foundation pit; computer-controlled STL ice thermal storage air-conditioning technology; composite thermal insulation wall technology; and sectional electricity distribution rooms. These undoubtedly add new vitality to the global design of the buildings.

1. Building intelligence technology
In the early 1990s the intelligent building had been introduced to China from abroad only as a concept. With the rapid adoption of the 10BASE-T computer network standard, the structured (comprehensive) cabling system was becoming the worldwide trend as the information and communication backbone of intelligent buildings. To make the bank building keep pace with the times, the designer strongly recommended and designed a structured cabling system with foresight; the building may well be called the first modernized building to adopt this technology at that time.

(1) Structured cabling system. The structured cabling system is a communication transmission network that links voice and data communication devices, switching equipment, and other management systems to one another, and also links this equipment to external communication networks. It includes the external telecommunication connections and all the internal cabling to the voice and data terminals at the workspaces of the information network. The structured cabling system adopts products of the American AT&T Corporation.
It is made up of the work area subsystem, the horizontal subsystem, the administration subsystem, the backbone (riser) subsystem, and the equipment room subsystem.

(2) Security automation system. The building's security monitoring system is divided into two parts: one for the public areas and one dedicated to the vaults. The dedicated vault monitoring system strictly controls the vault, the custody vault, and the cash-transport garage and keeps track records of personnel entering and leaving. Vibration alarm sensors are installed on every wall of the vault, and cameras and infrared/microwave detectors are installed in all relevant rooms, forming an automated monitoring and control system.

To realize building intelligence, the architect also completed a full indoor environment design, creating a comfortable and highly efficient working environment and opening up indoor and outdoor recreation spaces of various sizes, namely a landscaped atrium and a roof garden, which offer people who work in front of automated equipment all day a place to rest and recover; at the same time, special constructional nodes for suspended fittings were adopted in the concrete floors and walls.

2. Unbonded prestressed concrete flat-beam and flat-slab structure technology and flat-beam structure technology
To meet, with high assurance, the storey-height requirement put forward by the architect, the structural specialty sought to reduce the height of structural members by adopting unbonded prestressed concrete flat-beam and flat-slab structure technology and flat-beam structure technology.
(1) Compared with an ordinary beam-and-slab structure, the prestressed concrete flat-beam and slab structure saves about 15% of the concrete and 27% of the steel, and reduces the beam height by 300 mm.
(2) Compared with an ordinary beam-and-slab structure, the flat-beam structure saves about 10% of the concrete and 6.6% of the steel, and reduces the beam height by 200 mm.
With the total building height unchanged, adopting these structures allows the whole building to gain one additional floor, with good economic and social benefits.

3. Baseplate temperature reinforcement technology
In the basement design, after calculation and with perimeter thermal insulation construction measures taken to resist cracking of the baseplate, the temperature stress reinforcement in the middle of the slab was cancelled and only two layers of load-bearing reinforcement, top and bottom, were arranged. This not only saved construction cost but also met the requirements for an impervious and crack-resistant basement baseplate.
4. New technology for the design and construction of the basement foundation-pit enclosure
Two technical measures were adopted in the enclosure design:
(1) The cantilever bored-pile enclosure was designed with a strengthened reinforcement arrangement, saving about 60 t of steel and about 280,000 yuan of investment.
(2) By lowering the top elevation of the enclosure and adopting a capping-beam technique, the enclosure length was reduced by 1.5 m overall; the capping beam was set 1.5 m below natural ground level, changing the height over which the enclosure resists lateral pressure. Calculation shows this saved 137.9 cubic meters of concrete and 16.08 t of steel, directly reducing investment by 304,000 yuan.

5. Ice thermal storage air-conditioning technology
Ice thermal storage air-conditioning is still a new technology in our country; its greatest advantages are shifting electricity demand away from the peak and saving operating expenses. In this design a partial (load-levelling) ice storage mode was adopted. Partial storage gives a high utilization ratio of the refrigeration plant, requires a smaller cold-storage capacity, and allows the refrigeration plant capacity to be 30%-45% smaller than conventional air-conditioning equipment; it is an economical and effective operating mode.

To support the ice thermal storage air-conditioning technology, intelligent technology was used: the ice storage air-conditioning system is computer controlled, with five main tasks:
(1) according to the user's cooling load demand and the structure of the electricity tariff, automatically establish the optimal operating mode of the ice storage system and reduce the operating expenses of the whole system;
(2) make full use of the capacity of the cold storage device, using up essentially all of the stored cold on the same day;
(3) automatically monitor the operating state of the system to ensure that the main equipment of the ice storage system operates normally and safely;
(4) automatically record the system's operating parameters, display the operation flow chart, and print operating-parameter reports;
(5) predict the future cooling load and determine the optimal future operating scheme.
The ice storage air-conditioning system has been in trial operation for some time; the results indicate that the control system is stable, reliable, and easy to operate, and that the system achieves remarkable energy savings.
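As a purely illustrative sketch of control task (1) above, and not the actual control software of the project, the following Python fragment shows how a controller might choose between charging the ice store, melting ice, and running the chiller directly, based on a time-of-use tariff and the forecast cooling load; the tariff hours, load values, and capacity figures are invented for the example.

```python
# Simplified dispatch rule for a partial ice-storage air-conditioning plant.
PEAK_HOURS = range(8, 22)          # assumed high-tariff hours
CHILLER_CAPACITY = 400.0           # kW of cooling the chiller can supply directly
ICE_DISCHARGE_CAPACITY = 300.0     # kW of cooling the ice store can supply

def dispatch(hour, cooling_load_kw, ice_stored_kwh):
    """Return (chiller_kw, ice_melt_kw, charge_ice) for one hour of operation."""
    if hour not in PEAK_HOURS:
        # Off-peak: meet the (smaller) load with the chiller and recharge the store.
        return min(cooling_load_kw, CHILLER_CAPACITY), 0.0, True
    # Peak: melt ice first to cut expensive electricity use; chiller covers the rest.
    ice_melt = min(cooling_load_kw, ICE_DISCHARGE_CAPACITY, ice_stored_kwh)
    chiller = min(max(cooling_load_kw - ice_melt, 0.0), CHILLER_CAPACITY)
    return chiller, ice_melt, False

# Example hour: 14:00, 600 kW forecast load, 2000 kWh of ice in store.
print(dispatch(14, 600.0, 2000.0))   # -> (300.0, 300.0, False)
```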
6. Composite thermal insulation wall
The Hangzhou area needs heating in winter and cooling in summer. For this climate a composite insulating wall was specially designed for the building envelope, combining finish, thermal insulation, and heat insulation in one assembly to better realize building energy conservation.

The composite insulating wall is combined with the characteristics of the elevation: an aluminium-panel facade finish is designed over the insulating wall. Its construction is as follows: hollow bricks are laid within the frame structure, and matte fluorocarbon-coated (American coating) aluminium panels are hung outside the hollow-brick wall. The aluminium panels have highly efficient heat-insulating performance against solar radiation: under the same solar radiation, surfaces absorb very different amounts of heat depending on the nature, color, and roughness of the surface material, a difference expressed by the solar radiation absorption coefficient (α) and the material radiation coefficient (Cλ). Adopting surface materials with small α and Cλ values gives remarkable results, and aluminium panel has small values of both: α = 0.26 and Cλ = 0.4, versus α = 0.56 and Cλ = 4.3 for light-grey face brick. Because the aluminium panels are hung with an air layer between them and the hollow-brick wall, and because the panels have favorable conditions for radiant heat transfer to the atmosphere and the air layer, this construction also provides highly efficient insulation for indoor heating in winter. Thus building energy conservation is achieved well in both summer and winter.

7. Popularizing the sectional electricity distribution room
Considering the "gold" value of ground-floor frontage, the main distribution facility of the building was located on the east side, and for the first time in the design the transformers and the low-voltage distribution equipment were placed in the same room, forming a sectional electricity distribution room. This greatly reduces the floor area of the transformer substation. Power is distributed by zones to the upper and lower floors, and the switching system is fully integrated. Since completion and commissioning, the functions have been clear, the layout compact and rational, and system dispatching flexible. The technology has since been used extensively and has become a model for later designs.

Conclusion
The integrated (global) design mode can raise the adaptability of buildings and will be the inevitable trend; it becomes even more important as environmental and energy-saving consciousness strengthens. With the continuous development of our country's economy, science, and technology, more advanced technologies and scientific achievements will be applied to buildings. We firmly believe that in the near future more outstanding examples of global building design will appear on our country's architectural stage. We will keep summarizing and progressing; this is the great responsibility that history gives to architects and engineers.

Translation: The Global Design of Building Structures: Architectural Creation and Comprehensive Technology. The 21st century will be an era in which technologies from many disciplines coexist; this will inevitably form an enormous driving force for the development of architecture, and the global design of building structures will become more and more important. Architects must seize the opportunity, give full play to their leading role, and preside well over every building engineering design.
Chinese-English Foreign Literature Translation

The Internet of Things

1. Definition and connotation
The English name of the Internet of Things is "The Internet of Things," abbreviated IOT. Through sensing and radio frequency identification technology, global positioning system technology, and other means, the Internet of Things monitors, connects, and interacts with any object or process in real time, collecting whatever information is needed about its sound, light, heat, electricity, mechanics, chemistry, biology, and location. Through every possible form of network access it links things with things and things with people in a pervasive way, achieving intelligent perception, identification, and management of items and processes. As the integrated application of intelligent sensing and identification technology with pervasive computing and ubiquitous networks, the Internet of Things is known as the third wave of the world information industry, following the computer and the Internet. The Internet of Things is not so much a network as it is services and applications, and it is also seen as an extension of Internet applications. Therefore, application innovation is the core of the development of the Internet of Things, and innovation centered on the user experience, in the spirit of Innovation 2.0, is its soul.

2. The meaning of "things"
An "object" can be included in the scope of the "Internet of Things" if it meets the following conditions:
1. it has a receiver for the corresponding information;
2. it has a data transmission path;
3. it has a certain storage capability;
4. it has a CPU;
5. it has an operating system;
6. it has a specialized application;
7. it has a data transmitter;
8. it follows the communication protocol of the Internet of Things;
9. it has a unique number that can be identified on the network.

3. A "Chinese-style" definition
The Internet of Things refers to ubiquitous (Ubiquitous) terminal devices (Devices) and facilities (Facilities), including sensors with "inner intelligence," mobile terminals, industrial systems, building control systems, home intelligent facilities, and video surveillance systems, as well as externally "enabled" objects, such as assets (Assets) tagged with RFID and individuals and vehicles carrying wireless terminals, that is, "intelligent objects or animals" or "smart dust" (Motes). Through various wireless and/or wired, long-distance and/or short-range communication networks these achieve interoperability (M2M) and application integration (Grand Integration). Based on cloud computing and the SaaS operating mode, and using appropriate information security mechanisms in intranet, extranet, and/or Internet environments, they provide safe, controllable, and even personalized functions such as real-time online monitoring, traceable positioning, alarm linkage, command and dispatch, plan management, remote control, security, remote repair and maintenance, online upgrades, statistical reporting, decision support, and "leadership desktop" (Cockpit Dashboard) management and services, realizing the integration of "management, control and operation" for "everything" in an "efficient, energy-saving, safe and environmentally friendly" way.

4. EU definition
In September 2009, at the China-EU Seminar on the Internet of Things and enterprise environments held in Beijing, the official responsible for RFID in the European Commission's Information Society and Media Directorate-General, Dr.
Lorent Ferderix, gave the EU's definition of the Internet of Things: the Internet of Things is a dynamic global network infrastructure with standards-based, interoperable communication protocols and self-organizing capabilities, in which physical and virtual "objects" with identities, physical attributes, virtual characteristics, and intelligent interfaces are seamlessly integrated into the information network. The Internet of Things, together with the media Internet, the services Internet, and the enterprise Internet, will constitute the future Internet.

5. Evolution of the term
The term "Internet of Things," now universally recognized at home and abroad, was first proposed in 1999 by Professor Ashton of the MIT Auto-ID Center in the context of RFID research. In the report of the same name released in 2005 by the International Telecommunication Union (ITU), the definition and scope of the Internet of Things changed and its coverage expanded considerably; it no longer refers only to an Internet of Things based on RFID technology.

Since August 2009, when Premier Wen Jiabao put forward the idea of "Sensing China," the Internet of Things has been officially listed as one of the five national emerging strategic industries and written into the Government Work Report, and it has received a degree of attention in Chinese society that is unparalleled in the United States, the European Union, or other countries.

The concept of the Internet of Things is therefore not so much a foreign concept as one that is by now "Made in China": keeping pace with the times, its coverage has gone beyond the scope referred to by Professor Ashton in 1999 and by the 2005 ITU report, and the Internet of Things has been given a "Chinese-style" label.

6. Background
The concept of the Internet of Things appeared in 1999. Based on the Internet, RFID technology, and EPC standards, and built on the foundation of the computer Internet, it uses radio frequency identification and wireless data communication technology to construct a physical Internet that shares information about items across the globe in real time, the "Internet of Things." This was also the basis of the first wave of enthusiasm for the Internet of Things set off in China in 2003.

The sensor network is a network built on sensing technology. The Chinese Academy of Sciences started sensor network research in 1999 and has made some scientific achievements, establishing practical sensor networks. At an international conference on mobile computing and networking held in the United States in 1999, it was stated that "the sensor network is a development opportunity facing humanity in the next century." In 2003, Technology Review in the United States listed sensor network technology first among the ten technologies that will change people's lives in the future.

On November 17, 2005, at the World Summit on the Information Society (WSIS) held in Tunis, the International Telecommunication Union released the ITU Internet Report 2005: The Internet of Things, in which the concept of the "Internet of Things" was cited. The report pointed out that the era of ubiquitous "Internet of Things" communication is approaching, in which all objects in the world, from tires to toothbrushes and from houses to tissues, will exchange information actively via the Internet.
Radio frequency identification (RFID), sensor technology, nanotechnology, and intelligent embedded technology will be more widely used.

According to the ITU's description, in the era of the Internet of Things short-range mobile transceivers will be embedded in all kinds of daily necessities, and human information and communication will gain a new dimension: from connections between people at any time and in any place, communication will extend to connections between people and things and between things and things. The rise of the Internet of Things concept is largely due to the ITU's 2005 annual Internet report, titled The Internet of Things. However, the ITU report lacked a clear definition of the Internet of Things.

In China there is likewise no single standard definition, but in essence the Internet of Things is an aggregated application of modern information technology at a certain stage of development: an aggregation and integration of various sensing technologies, modern network technology, artificial intelligence, and automation technology, so that humans and objects can hold an intelligent dialogue and create a world of wisdom. Because the development of Internet of Things technology involves almost every aspect of IT, it is an aggregated, systematic innovation of applications and development, and can therefore be called a revolutionary innovation of the information industry. In summary, the nature of the Internet of Things is mainly reflected in three aspects. First, the Internet feature: objects to be networked must be able to interoperate over the Internet. Second, the identification and communication feature: "objects" included in the Internet of Things must have the functions of automatic identification and object-to-object communication (M2M). Third, the intelligence feature: the network system should have automated, self-feedback, and intelligent control characteristics.

On January 28, 2009, the newly inaugurated President of the United States, Barack Obama, held a "round table" with U.S. business leaders; as one of only two representatives, IBM CEO Sam Palmisano put forward the concept of the "Smart Earth" for the first time and recommended that the new government invest in a new generation of intelligent infrastructure. According to news of February 24, 2009, IBM Greater China CEO Qian Daqun announced the "Smarter Planet" strategy at the 2009 IBM forum. Once this concept was put forward, it attracted great attention from all walks of life in the United States; some analysts even believe that IBM's vision is very likely to rise to the level of U.S. national strategy, and it caused a sensation around the world. IBM believes that the next phase of the industry's mission is to make full use of a new generation of IT technology in every industry: specifically, to embed sensors and equipment into power grids, railways, bridges, tunnels, highways, buildings, water supply systems, dams, oil and gas pipelines, and other objects, and to connect them generally so as to form the Internet of Things. At the strategy conference IBM proposed that implanting the concept of "wisdom" into the implementation of infrastructure can not only stimulate the economy and promote employment in the short term, but also build a mature smart infrastructure platform for China in a short period of time. IBM's "Smarter Planet" strategy will set off another wave of industrial revolution following the Internet technology wave.
Former IBM CEO Lou Gerstner raised an important point of view: every 15 years there is a revolution in the computing model. This judgment, like Moore's Law, has proved accurate, and people call it the "15-year cycle law." Around 1965 the change was symbolized by the mainframe, 1980 was marked by the popularization of the personal computer, and 1995 brought the Internet revolution. Each such technological change caused major upheaval in the competitive landscape of enterprises, industries, and even nations. To a certain extent the Internet revolution was ripened by the "information superhighway" strategy: in the 1990s the Clinton administration planned to spend 20 years and 200 to 400 billion dollars building the U.S. national information infrastructure, creating huge economic and social benefits.

Today, many Americans believe that the "Smarter Planet" strategy has many similarities with the "information superhighway"; both are seen as key strategies for reviving the economy and securing competitive advantage. The strategy can set off a wave of technological and economic attention, as the Internet revolution did, not only in the United States but worldwide.

"The prospects of the Internet of Things are very bright; it will dramatically change our current way of life," said Zhao Guoan, director of the National Electrical and Electronic Demonstration Center at Nanjing University of Aeronautics and Astronautics. Industry experts say that the Internet of Things personifies the things in our lives; objects become, in a sense, human-like. Goods in the physical world "exchange" information with one another without human intervention. The Internet of Things uses radio frequency identification (RFID) technology to achieve automatic identification of goods (products) and the interconnection and sharing of information through the computer Internet. It can be said that the Internet of Things depicts a world full of intelligence; in the world of the Internet of Things, physical objects are connected into a web.

At the second China Mobile Government Seminar, "Knowledge Society and Innovation 2.0," held at Peking University in November 2008, experts proposed that mobile technology and Internet of Things technology are driving changes in economic and social forms and in the forms of innovation, and are pushing the next generation of innovation (Innovation 2.0), oriented to the knowledge society and centered on the user experience, to pay more attention to the user and to be people-oriented. Research institutions expect that within ten years the Internet of Things may be adopted on a massive scale, developing into a high-tech market worth trillions of yuan, an industry more than 30 times the size of the Internet.

It is understood that the Internet of Things industry chain can be broken down into four links: identification, perception, processing, and information transfer. The key technologies of each link are, respectively, RFID, sensors, smart chips, and the telecom operators' wireless transmission networks. In its report "Internet of Things in 2020," EPOSS analyzes and predicts that the development of the Internet of Things will go through four stages: before 2010, RFID is widely used in logistics, retail, and pharmaceuticals; from 2010 to 2015, objects are interconnected; from 2015 to 2020, objects become semi-intelligent; and after 2020, objects become fully intelligent. As the vanguard of the Internet of Things, RFID has become the technology most closely watched by the market.
Data show that the global RFID market grew from $4.93 billion in 2007 to $5.29 billion in 2008; this figure covers all aspects of the RFID market, including tags, readers and other infrastructure, software, and services. RFID cards and card-related infrastructure account for 57.3 percent of the market, or $3.03 billion. Applications in the financial and security industries will drive the growth of the RFID card market. Analysys International forecasts that the Chinese RFID market will reach 5.0 billion yuan in 2009, a compound annual growth rate of 33 percent, of which electronic tags account for more than 3.8 billion yuan, readers for nearly 700 million yuan, and software and services for about 500 million yuan.

MEMS is the abbreviation of micro-electro-mechanical systems. MEMS technology is built on the micro/nano scale, and its market prospects are broad. The main advantages of MEMS sensors are their small size and the cost reductions of large-scale mass production; they are mainly used in two major areas, automotive and consumer electronics. According to the latest IC Insights report, global sales of MEMS-based semiconductor sensors and actuators are expected to grow at a 19 percent compound annual growth rate (CAGR) from 2007 to 2012, from $4.1 billion in 2007 to $9.7 billion in annual sales over five years.

7. Principle
The Internet of Things is built on the basis of the computer Internet, RFID, and wireless data communication technology to construct an "Internet of Things" covering everything in the world. In this network, goods (products) "exchange" information with one another without human intervention. Its essence is the use of radio frequency identification (RFID) technology to achieve automatic identification of goods (products) and the interconnection and sharing of information through the computer Internet.

A very important technology in the Internet of Things is radio frequency identification (RFID). RFID is the abbreviation of Radio Frequency Identification, an automatic identification technology that began to rise in the 1990s and is a relatively advanced non-contact identification technology. Developing RFID technology on the basis of a simple RFID system and combining it with existing network technology, database technology, and middleware technology, one can build an Internet of Things composed of a large number of networked readers and countless mobile tags, a network much larger than the Internet.

RFID is a technique that lets items "speak." In the "Internet of Things" concept, RFID tags store standardized, interoperable information that is collected automatically into a central information system via wireless data communication networks, achieving identification of goods (products); the information is then exchanged and shared through the open computer network, achieving "transparent" management of items.

The information technology revolution represented by the Internet of Things is referred to as a specific application of the pervasive extension of IT. Through intelligent sensing, identification technology, pervasive computing, and ubiquitous network convergence applications, the Internet of Things breaks the conventional thinking of the past: human beings can achieve ubiquitous computing and network connectivity [3]. Traditional thinking has always separated physical infrastructure from IT infrastructure: on the one hand airports, roads, and buildings, and on the other the data center, the PC, and broadband.
In the era of the "Internet of Things," reinforced concrete and cables are integrated with chips and broadband into a unified infrastructure. In this sense, infrastructure is more like a new earth on which the world really operates, encompassing economic management, production operation, society, and even personal life. The "Internet of Things" makes the management of production and life much more refined and dynamic, manages the future city so that it achieves a state of "wisdom," improves resource utilization and productivity, and improves the relationship between man and nature.

8. Organization
1. Institution building. As the first national industry community organization for the Internet of Things, the Internet of Things Technology and Products Application Professional Committee of the China Electronics Chamber of Commerce (abbreviated "the IoT Committee") received preliminary approval from the Ministry of Civil Affairs in June 2010 and was submitted in August for final approval by the Ministry.
2. Main tasks. To serve as a bridge between business and government, assisting the government with industry guidance, coordination, consultation, and services and helping members convey business requirements to the government; to coordinate relationships between enterprises, strengthen technical cooperation and product distribution, and eliminate vicious competition; to supervise members' correct implementation of national laws and regulations and regulate the industry; and to promote cooperation, resource sharing, and capital operation among members in information and communication technology products, promote the application of Internet of Things technologies and products, and promote the scaling up and joint development of the Internet of Things industry.

9. Construction
Carrying out the Internet of Things in practical applications requires the involvement of all walks of life, as well as the guidance of the national government and the support of related regulations and policies. The launching of the Internet of Things is characterized by large scale, broad participation, and strong management, technical, and material requirements, among other features, and the technical problems are the most crucial issues. According to Yibo Logistics Consulting, Internet of Things technology is an integrated technology; no single company has yet taken overall responsibility for planning and building the network of the entire system. Theoretical studies have commenced in all walks of life, while practical applications are still limited to individual industries. The key lies in planning and design and in the research and development of RFID, sensors, embedded software, and data transmission and computation.
In general, the steps for carrying out the Internet of Things are mainly as follows:
(1) identify the object's attributes, both static and dynamic; static attributes can be stored directly in the tag, while dynamic attributes need to be detected in real time by sensors;
(2) use identification equipment to read the object's attributes and convert the information into a data format suitable for network transmission;
(3) transmit the object's information over the network to an information processing center (which may be distributed, such as home computers or mobile phones, or centralized, such as China Mobile's IDC), where the computation concerning communication between objects is completed.
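As a minimal illustrative sketch of these three steps (not taken from the original text), the following Python fragment models an object whose static attributes come from its tag and whose dynamic attribute is read from a sensor, packages them in a network-friendly format, and hands them to a processing center; the field names, the sensor-reading stub, and the processing function are all hypothetical.

```python
import json, time, random

# Step (1): object attributes -- static ones would live on the tag,
# dynamic ones must be sensed in real time.
STATIC_ATTRS = {"object_id": "TAG-0001", "category": "container", "owner": "demo"}

def read_temperature_sensor():
    """Stand-in for a real sensor driver (hypothetical)."""
    return round(20.0 + random.random() * 5.0, 2)

# Step (2): the reader combines static and dynamic attributes and converts
# them into a format suitable for network transmission (JSON here).
def build_message():
    record = dict(STATIC_ATTRS)
    record["temperature_c"] = read_temperature_sensor()
    record["timestamp"] = int(time.time())
    return json.dumps(record)

# Step (3): hand the message to an information processing center; a real system
# would send it over the network, here it is just a local function call.
def processing_center(message: str):
    data = json.loads(message)
    print("processing object", data["object_id"], "at", data["temperature_c"], "degC")

processing_center(build_message())
```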
10. Key areas
The Internet of Things has four key areas: (1) RFID; (2) sensor networks; (3) M2M; (4) the integration of the two (informatization and industrialization).

11. Trend
Industry experts believe that the Internet of Things can, on the one hand, improve economic efficiency and save costs significantly, and on the other hand provide technical impetus for global economic recovery. Currently the United States and the European Union are both investing heavily in in-depth research on the Internet of Things. China is also paying close attention: the Ministry of Industry and Information Technology, together with the relevant departments, is conducting research on the new generation of IT in order to form policies and measures supporting its development.

China Mobile CEO Wang Jianzhou has repeatedly mentioned that the Internet of Things will become a focus of China Mobile's future development. He has invited Taiwanese manufacturers of RFID, sensors, and bar codes to cooperate with China Mobile. According to him, using Internet of Things technology, Shanghai Mobile has tailored integrated wireless application solutions covering data collection, transmission, processing, and business management for a number of industrial customers. The latest data show that Shanghai Mobile has mounted more than 100,000 chips on taxis, buses, and other vehicles; Internet of Things applications of various forms are showing their prowess in all walks of life and ensuring the orderly operation of the city. During the Shanghai World Expo, the "bus service through" system was fully applied to the Shanghai public transport system, protecting smooth traffic in the Expo area with the most advanced technology; for logistics and transport management, "e-logistics" provides users with real-time, accurate cargo information, vehicle tracking and positioning, transport path selection, and logistics network design and optimization services, greatly enhancing the comprehensive competitiveness of logistics enterprises.

In addition, with the popularization of the "Internet of Things," the number of sensors and RFID tags on animals, plants, machinery, and other items, together with the related interface devices, will greatly exceed the number of mobile phones. The promotion of the Internet of Things will become a driver of economic development and open up a potential development opportunity for the industry. According to current demand, billions of sensors and electronic tags will be needed in the coming years, which will greatly promote the production of IT components and increase the number of job opportunities.

According to reports, two factors are essential for truly building an effective Internet of Things. The first is scale: only with sufficient scale can the intelligence of items play its role. For example, in a city with one million vehicles, if smart systems are installed on only 10,000 of them, it is impossible to form an intelligent transportation system. The second is mobility: items are usually not static but in motion, so we must be able to maintain dialogue with items in motion, even in high-speed motion, at any time.

FORRESTER, an authoritative U.S. advisory body, predicts that by 2020 the volume of thing-to-thing communication business on the Internet of Things, compared with person-to-person communication business, will reach a ratio of 30 to 1; the "Internet of Things" is therefore expected to be the next trillion-scale communications business.

Why is the Internet of Things heat wave growing so rapidly in China? Its rapid rise is due to several of our country's advantages in this field. First, China launched research on the core sensor network technology of the Internet of Things as early as 1999, and its R&D level is among the world's leaders. Second, in the field of sensor networks China is one of the dominant countries in standard setting and one of the patent holders. Third, China is one of the countries that has achieved a complete industrial chain for the Internet of Things. Fourth, China's wireless communication networks and broadband coverage provide a solid infrastructure to support the development of the Internet of Things. Fifth, China has become the world's third largest economy, with strong economic strength to support the development of the Internet of Things.

12. Misconceptions
There are currently many misunderstandings of the Internet of Things, which directly affect our understanding of its impact on the development of the logistics industry; it is therefore necessary first to distinguish the errors and clarify our thinking.

First, equating sensor networks or RFID networks with the Internet of Things. In fact, sensor technology and RFID technology are each simply one kind of information collection technology. Besides sensor technology and RFID technology, GPS, video recognition, infrared, laser, and scanning can all achieve automatic identification of and communication with physical objects, and any such information collection technology can become part of the Internet of Things. Sensor networks or RFID networks are just applications of the Internet of Things, not the whole of it.

Second, regarding the Internet of Things as a boundless, infinite extension of the Internet: a completely open platform in which all things are interconnected and everything is shared. In fact, the Internet of Things is not a simple infinite extension of the globally shared Internet. Even the Internet itself does not refer only to the internationally shared computer network we usually think of; it also includes WANs and LANs. The Internet of Things can be an extension of the Internet in the usual sense to the world of things, but it can also be a LAN or an industry-specific professional network built according to real needs and industrial applications. In reality it is neither necessary nor possible to network every item, and it is not necessary for every professional or local network to be connected to the globally shared Internet platform. In the future, professional networks and LANs such as smart logistics, smart transportation, the smart grid, and smart communities will be the largest application space of the Internet of Things.

Third, thinking that, because the Internet of Things is a ubiquitous network, it is a castle in the air, a technology that is difficult to realize. In fact the Internet of Things is real, and many primary Internet of Things applications are already serving us.
The Internet of Things concept aggregates and integrates many existing real-world applications, such as pre-existing networks and intelligent, automated systems, and summarizes and upgrades them, raising our understanding of them from a higher perspective.

The fourth myth treats the Internet of Things as a basket into which everything can be put: any product with some self-awareness that can interact and communicate is labelled an Internet of Things application. For example, an appliance becomes a so-called Internet of Things appliance merely because a few sensors are embedded in it, and a product becomes an Internet of Things application merely because an RFID tag is attached to it.
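To make the three-step flow described at the start of this article concrete, the following is a minimal illustrative sketch, not taken from the original text: it models a tagged object whose static attributes live on the tag, reads a dynamic attribute from a sensor, packages everything in a network-friendly format, and hands it to a processing center. All names here (read_temperature_sensor, ProcessingCenter, the tag IDs) are hypothetical placeholders.

```python
import json
import random
import time

def read_temperature_sensor() -> float:
    """Hypothetical stand-in for a real sensor driver (a dynamic attribute)."""
    return round(20.0 + random.random() * 5.0, 2)

def read_tag(tag_id: str) -> dict:
    """Step 1: collect the object's attributes.
    Static attributes are stored on the tag; dynamic ones come from sensors."""
    return {
        "tag_id": tag_id,                             # static, stored on the tag
        "category": "cold-chain container",           # static, stored on the tag
        "temperature_c": read_temperature_sensor(),   # dynamic, sensed in real time
        "timestamp": time.time(),
    }

def encode_for_network(attributes: dict) -> bytes:
    """Step 2: convert the reading into a format suitable for network transmission."""
    return json.dumps(attributes).encode("utf-8")

class ProcessingCenter:
    """Step 3: a (distributed or centralized) information processing center."""
    def __init__(self):
        self.store = []

    def receive(self, payload: bytes) -> None:
        record = json.loads(payload.decode("utf-8"))
        self.store.append(record)
        # Example of the "object communication calculation" done at the center.
        if record["temperature_c"] > 24.0:
            print(f"ALERT {record['tag_id']}: temperature {record['temperature_c']} C too high")

if __name__ == "__main__":
    center = ProcessingCenter()
    for tag in ("TAG-0001", "TAG-0002"):
        center.receive(encode_for_network(read_tag(tag)))
```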
智能交通系统中英文对照外文翻译文献(文档含英文原文和中文翻译)

原文:

Traffic Assignment Forecast Model Research in ITS

Introduction
The intelligent transportation system (ITS) is developing rapidly along with sustainable urban development, digital city construction and the growth of transportation. One of the main functions of an ITS is to improve the transportation environment and relieve congestion. The most effective way to achieve this is to forecast the traffic volume of the local network and its important nodes accurately, using the path-analysis functions of GIS and related mathematical methods, which in turn supports better planning of the traffic network. Traffic assignment forecasting is an important phase of traffic volume forecasting: it assigns the forecast traffic to every road in the traffic sector. If the forecast volume on a certain road is large enough to cause congestion, planners must consider building new roads or improving existing ones to relieve it. This study presents an improved traffic assignment forecast model, MPCC, based on an analysis of the advantages and disadvantages of the classic traffic assignment forecast models, and tests the validity of the improved model in practice.

1 Analysis of classic models
1.1 Shortcut traffic assignment
Shortcut traffic assignment is a static traffic assignment method. It ignores the effect of traffic load on travel, and treats traffic impedance (travel time) as a constant. The traffic volume of every origin-destination pair is assigned to the shortest path between the origin and the destination, while the volume on all other roads in the sector is zero. The method has the advantage of simple calculation, but its obvious shortcoming is the uneven distribution of traffic: all assigned volume is concentrated on the shortest path, which is clearly unrealistic. Nevertheless, shortcut traffic assignment is the basis of all the other assignment methods.

1.2 Multi-ways probability assignment
In reality, travelers always want to choose the shortest path to the destination, which is called the shortcut factor; however, because of the complexity of the traffic network, the path actually chosen is not necessarily the shortest, which is called the random factor. Although every traveler hopes to follow the shortest path, some in fact do not. The shorter a path is, the greater its probability of being chosen; the longer it is, the smaller that probability. The multi-ways probability assignment model is therefore based on the LOGIT model (a small numerical illustration follows this subsection):

$p_i = \dfrac{\exp(-\theta F_i)}{\sum_{j=1}^{n} \exp(-\theta F_j)}$  (1)

where $p_i$ is the probability of path section i being chosen; $F_i$ is the travel time of path section i; and $\theta$ is the transport decision parameter, calibrated by the following principle: first calculate $p_i$ for different values of $\theta$ (from 0 to 1), then choose the $\theta$ that makes $p_i$ closest to the observed $p_i$.

Both the shortcut factor and the random factor are considered in multi-ways probability assignment, so the assignment result is more reasonable; but the relationship between traffic impedance, traffic load and road capacity is not considered, which makes the result imprecise in more crowded traffic networks.
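The following minimal sketch, which is not part of the original paper, evaluates Eq. (1) for a hypothetical three-path example; the travel times and the values of theta are made-up illustration values.

```python
import math

def logit_probabilities(travel_times, theta):
    """Eq. (1): p_i = exp(-theta * F_i) / sum_j exp(-theta * F_j)."""
    weights = [math.exp(-theta * f) for f in travel_times]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical example: three alternative paths with travel times in minutes.
F = [12.0, 15.0, 20.0]
for theta in (0.1, 0.5, 1.0):
    probs = logit_probabilities(F, theta)
    print(theta, [round(p, 3) for p in probs])
# A larger theta concentrates the assigned flow on the shortest path,
# approaching shortcut assignment, exactly as the text describes.
```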
We attempt to improve the accuracy by integrating the several elements above in one model: MPCC.

2 Multi-ways probability and capacity constraint model
2.1 Rational path aggregate
To make the improved model more reasonable in application, the concept of the rational path aggregate is proposed. The rational path aggregate, which is the foundation of the MPCC model, constrains the scope of calculation. It is the set of paths between the start and the end of the traffic sector defined by inner nodes chosen according to the following rules: the distance between the next inner node and the start may not be shorter than the distance between the current node and the start; at the same time, the distance between the next inner node and the end may not be longer than the distance between the current node and the end. The multi-ways probability assignment model is applied only within the rational path aggregate to assign the forecast traffic volume, which greatly enhances the applicability of the model.

2.2 Model assumptions
1) Traffic impedance is not a constant; it is determined by vehicle characteristics and the current traffic situation.
2) The traffic impedance that travelers estimate is random and imprecise.
3) Every traveler chooses a path from his or her own rational path aggregate.
Based on the assumptions above, the MPCC model can be used to assign the traffic volume of each origin-destination pair in the sector.

2.3 Calculation of path traffic impedance
Travelers understand path traffic impedance differently, but in general the travel cost, which is made up mainly of forecast travel time, travel length and forecast travel outlay, is taken as the traffic impedance. Eq. (2) expresses this relationship:

$C_a = \alpha T_a + \beta L_a + \gamma F_a$  (2)

where $C_a$ is the traffic impedance of path section a; $T_a$ is the forecast travel time of path section a; $L_a$ is the travel length of path section a; $F_a$ is the forecast travel outlay of path section a; and $\alpha$, $\beta$, $\gamma$ are the weights of the three elements that make up the impedance. For a given path section, $\alpha$, $\beta$ and $\gamma$ differ between vehicle types, so the weighted averages of $\alpha$, $\beta$ and $\gamma$ for each section can be obtained from the statistical share of each vehicle type on that section.

2.4 Chosen probability in MPCC
Travelers always want to follow the best path (the shortcut in a broad sense), but because of the random factor they can only choose the path whose traffic impedance they themselves estimate to be smallest. This is the key point of MPCC. According to the random utility theory of economics, if traffic impedance is treated as negative utility, the chosen probability $p_{rs}$ of the origin-destination pair (r, s) should follow the LOGIT model:

$p_{rs} = \dfrac{\exp(-b C_{rs})}{\sum_{j=1}^{n} \exp(-b C_j)}$  (3)

where $p_{rs}$ is the chosen probability of path section (r, s); $C_{rs}$ is the traffic impedance of path section (r, s); $C_j$ is the traffic impedance of each path section in the forecast traffic sector; and b reflects travelers' cognition of the traffic impedance of paths in the sector, being inversely proportional to its deviation. If b → ∞, the deviation of travelers' understanding of traffic impedance approaches 0; in that case all travelers follow the path with the smallest traffic impedance, which is equivalent to the result of shortcut traffic assignment. Conversely, if b → 0, travelers' understanding error approaches infinity.
In this case, the paths travelers choose are scattered. One objection is that b in Eq. (3) has a dimension, and because its deviation would have to be known in advance it is difficult to determine its value. Eq. (3) is therefore improved as follows:

$p_{rs} = \dfrac{\exp(-b C_{rs}/\bar{C}_{OD})}{\sum_{j=1}^{n} \exp(-b C_j/\bar{C}_{OD})}$, with $\bar{C}_{OD} = \dfrac{1}{n}\sum_{j=1}^{n} C_j$  (4)

where $\bar{C}_{OD}$ is the average traffic impedance of all the assigned paths, and b, now dimensionless, depends only on the rational path aggregate rather than on the traffic impedance. According to actual observation, b is an empirical value generally between 3.00 and 4.00; for the more crowded internal city roads, b is normally between 3.00 and 3.50.

2.5 Flow of MPCC
The MPCC model combines the idea of multi-ways probability assignment with iterative capacity-constrained traffic assignment (a compact illustrative sketch appears at the end of this extract).

First, the geometric information of the road network and the OD traffic volumes are obtained from the relevant data, and the rational path aggregate is determined with the method explained in Section 2.1.

Second, the traffic impedance of each path section is calculated with Eq. (2), as described in Section 2.3.

Third, on the basis of those impedances, the forecast traffic volume of every path section is calculated with the improved LOGIT model (Eq. (4)) of Section 2.4, which is the key point of MPCC.

Fourth, the calculation above yields the chosen probability and forecast traffic volume of each path section, but this is not the end. The traffic impedance must be recalculated under the new traffic volume. As shown in Fig. 1 (flowchart of MPCC), because the relationship between traffic impedance and traffic load is considered, the traffic impedance and the forecast assigned volume of every path are continually amended. Using the relationship between average speed and traffic volume, the travel time and hence the traffic impedance of a path section can be calculated under different traffic volumes. For roads of different technical levels, the relationships between average speed and traffic volume are as follows:

1) Highway: $V = 179.49/N_A^{0.1082}$  (5)
2) Level 1 roads: $V = 155.84/N_A^{0.1433}$  (6)
3) Level 2 roads: $V = 112.57/(0.91\,N_A^{0.66})$  (7)
4) Level 3 roads: $V = 99.1/(0.32\,N_A^{1.3})$  (8)
5) Level 4 roads: $V = 70.5/N_A^{0.0988}$  (9)

where V is the average speed of the path section and $N_A$ is the traffic volume of the path section.

Finally, the traffic volumes of the path sections are assigned repeatedly with the method of the previous step (the idea of iterative capacity-constrained assignment) until the traffic volume of every path section is stable.

译文

智能交通
交通量分配预测模型

介绍
随着城市的可持续化发展、数字化城市的建设以及交通运输业的发展,智能交通系统(ITS)的发展越来越快。
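To show how the pieces of the MPCC flow fit together, here is a compact illustrative sketch, not taken from the original paper: it iterates Eq. (4) path choice with a speed-volume update in the spirit of Eqs. (5)-(9). The example network, the impedance weights and the convergence tolerance are made-up illustration values, and the speed-volume relation uses the highway-type power law as reconstructed above.

```python
import math

# Hypothetical OD pair with three candidate paths (the "rational path aggregate").
# Each path: length (km) and toll/outlay (yuan).
paths = [
    {"length": 10.0, "outlay": 5.0},
    {"length": 12.0, "outlay": 3.0},
    {"length": 15.0, "outlay": 2.0},
]
OD_DEMAND = 3000.0                    # vehicles to assign between the OD pair
ALPHA, BETA, GAMMA = 1.0, 0.5, 0.2    # illustrative weights for Eq. (2)
B = 3.2                               # empirical b, within the 3.00-4.00 range given in the text

def speed(volume):
    """Speed-volume relation in the spirit of Eq. (5); constants as reconstructed above."""
    return 179.49 / max(volume, 1.0) ** 0.1082

def impedance(path, volume):
    """Eq. (2): C = alpha*T + beta*L + gamma*F, with T derived from the speed model."""
    travel_time = path["length"] / speed(volume)      # hours
    return ALPHA * travel_time + BETA * path["length"] + GAMMA * path["outlay"]

def eq4_probabilities(costs, b):
    """Eq. (4): LOGIT with impedances normalised by their mean, so b is dimensionless."""
    c_bar = sum(costs) / len(costs)
    weights = [math.exp(-b * c / c_bar) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

volumes = [OD_DEMAND / len(paths)] * len(paths)       # initial even split
for iteration in range(50):
    costs = [impedance(p, v) for p, v in zip(paths, volumes)]
    probs = eq4_probabilities(costs, B)
    new_volumes = [OD_DEMAND * pr for pr in probs]
    shift = max(abs(nv - v) for nv, v in zip(new_volumes, volumes))
    volumes = new_volumes
    if shift < 1.0:                                    # stop when volumes are stable
        break

print("assigned volumes:", [round(v) for v in volumes])
```

This simple successive-substitution loop is only meant to mirror the "assign, then recompute impedance, then reassign until stable" idea of Fig. 1; a production implementation would use a full network representation and a more robust averaging scheme.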
薪酬管理体系中英文对照外文翻译文献

XXX people. XXX enterprise management, as it has a XXX attract, retain, and motivate employees, particularly key talent. As such, it has XXX, retain, objective, XXX on the design of salary XXX.

2 The Importance of Salary System Design
The design of a salary system is XXX's success. An effective salary system can help attract and retain employees, XXX, XXX them to perform at their best. In contrast, a poorly designed salary system can lead to employee n and XXX, which can XXX.
To design an effective salary system, XXX factors, including the industry, the enterprise's size and stage of development, and the specific needs and goals of the XXX, XXX.

3 XXX
XXX. XXX incentives can help align the XXX with those of the enterprise and its shareholders, XXX to perform at their best.
When designing equity incentives,
中英文对照翻译文章

不少英文文章都是带有翻译的,为的就是方便读者在不理解文章的时候能够找到对照的意思。
中英文对照翻译文章篇1:A Sailor's Christmas Gift 一个水手的圣诞礼物
William J. Lederer 威廉·J·莱德勒
Last year at Christmas time my wife, three boys, and I were in France, on our way from Paris to Nice. For five wretched days everything had gone wrong. Our hotels were "tourist traps"; our rented car broke down; we were all restless and irritable in the crowded car. On Christmas Eve, when we checked into a dingy hotel in Nice, there was no Christmas spirit in our hearts.
由于接连五天的恶劣天气,旅途上一切很不顺心。
我们下榻的旅馆尽是些敲诈勒索旅客的“陷阱”;我们租用的那辆汽车老是发生故障,在拥挤不堪的车子上大家个个显得烦躁不安。
圣诞节前夕,我们住进了尼斯的一家旅店,这家旅店又脏又暗,我们打心眼里感觉不到丝毫的节日气氛。
It was raining and cold when we went out to eat. We found a drab little joint shoddily decorated for the holiday. It smelled greasy. Only five tables in the restaurant were occupied. There were two German couples, two French families, and an American sailor, by himself. In the corner a piano player listlessly played Christmas music.我们外出就餐时,天正下着小雨,天气寒冷。
中英文对照外文翻译(文档含英文原文和中文翻译)

Bridge research in Europe

A brief outline is given of the development of the European Union, together with the research platform in Europe. The special case of post-tensioned bridges in the UK is discussed. In order to illustrate the type of European research being undertaken, an example is given from the University of Edinburgh portfolio, relating to the identification of voids in post-tensioned concrete bridges using digital impulse radar.

Introduction
The challenge in any research arena is to harness the findings of different research groups to identify a coherent mass of data, which enables research and practice to be better focused. A particular challenge exists with respect to Europe, where language barriers are inevitably very significant. The European Community was formed in the 1960s, based upon a political will within continental Europe to avoid the European civil wars which developed into World War 2 from 1939 to 1945. The strong political motivation formed the original community, of which Britain was not a member; many of the continental countries saw Britain's interest as being purely economic. The 1970s saw Britain joining what was then the European Economic Community (EEC), and the 1990s saw the widening of the community into a European Union, EU, with certain political goals together with the objective of a common European currency.

Notwithstanding these financial and political developments, civil engineering, and bridge engineering in particular, have found great difficulty in forming any kind of common thread. Indeed, the educational systems for university training are quite different between Britain and the European continental countries. The formation of the EU funding schemes, e.g. Socrates, Brite Euram and other programs, has helped significantly. The Socrates scheme is based upon the exchange of students between universities in different member states. The Brite Euram scheme has involved technical research grants given to consortia of academics and industrial partners within a number of the states; a Brite Euram bid would normally be led by an industrialist.

In terms of dissemination of knowledge, two quite different strands appear to have emerged. The UK and the USA have concentrated primarily upon disseminating basic research in refereed journal publications: ASCE, ICE and other journals; whereas the continental Europeans have frequently disseminated basic research at conferences where the circulation of the proceedings is restricted.

Additionally, language barriers have proved to be very difficult to break down. In countries where English is a strong second language there has been enthusiastic participation in international conferences based within continental Europe, e.g. Germany, Italy, Belgium, The Netherlands and Switzerland. However, countries where English is not a strong second language have been hesitant participants, e.g. France.

European research
Examples of research relating to bridges in Europe can be divided into three types of structure:

Masonry arch bridges
Britain has the largest stock of masonry arch bridges. In certain regions of the UK up to 60% of the road bridges are historic stone masonry arch bridges originally constructed for horse-drawn traffic. This is less common in other parts of Europe, as many of these bridges were destroyed during World War 2.

Concrete bridges
A large stock of concrete bridges was constructed during the 1950s, 1960s and 1970s.
At the time, these structures were seen as maintenance free. Europe also has a large number of post-tensioned concrete bridges with steel tendon ducts preventing radar inspection. This is a particular problem in France and the UK.

Steel bridges
Steel bridges went out of fashion in the UK due to their need for maintenance, as perceived in the 1960s and 1970s. However, they have been used for long-span and rail bridges, and they are now returning to fashion for motorway widening schemes in the UK.

Research activity in Europe
It gives an indication of certain areas of expertise and work being undertaken in Europe, but is by no means exhaustive. In order to illustrate the type of European research being undertaken, an example is given from the University of Edinburgh portfolio. The example relates to the identification of voids in post-tensioned concrete bridges, using digital impulse radar.

Post-tensioned concrete rail bridge analysis
Ove Arup and Partners carried out an inspection and assessment of the superstructure of a 160 m long post-tensioned, segmental railway bridge in Manchester to determine its load-carrying capacity prior to a transfer of ownership, for use in the Metrolink light rail system. Particular attention was paid to the integrity of its post-tensioned steel elements. Physical inspection, non-destructive radar testing and other exploratory methods were used to investigate possible weaknesses in the bridge.

Since the sudden collapse of Ynys-y-Gwas Bridge in Wales, UK in 1985, there has been concern about the long-term integrity of segmental, post-tensioned concrete bridges, which may be prone to 'brittle' failure without warning. The corrosion protection of the post-tensioned steel cables, where they pass through joints between the segments, has been identified as a major factor affecting the long-term durability and consequent strength of this type of bridge. The identification of voids in grouted tendon ducts at vulnerable positions is recognized as an important step in the detection of such corrosion.

Description of bridge
General arrangement
Besses o' th' Barn Bridge is a 160 m long, three-span, segmental, post-tensioned concrete railway bridge built in 1969. The main span of 90 m crosses over both the M62 motorway and the A665 Bury to Prestwick Road. Minimum headroom is 5.18 m from the A665, and the M62 is cleared by approximately 12.5 m.

The superstructure consists of a central hollow trapezoidal concrete box section 6.7 m high and 4 m wide. The majority of the south and central spans are constructed using 1.27 m long pre-cast concrete trapezoidal box units, post-tensioned together. This box section supports the in situ concrete transverse cantilever slabs at bottom flange level, which carry the rail tracks and ballast.

The center and south span sections are of post-tensioned construction. These post-tensioned sections have five types of pre-stressing:
1. Longitudinal tendons in grouted ducts within the top and bottom flanges.
2. Longitudinal internal draped tendons located alongside the webs. These are deflected at internal diaphragm positions and are encased in in situ concrete.
3. Longitudinal Macalloy bars in the transverse cantilever slabs in the central span.
4. Vertical Macalloy bars in the 229 mm wide webs to enhance shear capacity.
5. Transverse Macalloy bars through the bottom flange to support the transverse cantilever slabs.

Segmental construction
The pre-cast segmental system of construction used for the south and center span sections was an alternative method proposed by the contractor.
Current thinking suggests that such a form of construction can lead to 'brittle' failure of the entire structure without warning, due to corrosion of tendons across a construction joint. The original design concept had been for in situ concrete construction.

Inspection and assessment
Inspection
Inspection work was undertaken in a number of phases and was linked with the testing required for the structure. The initial inspections recorded a number of visible problems including:
Defective waterproofing on the exposed surface of the top flange.
Water trapped in the internal space of the hollow box with depths up to 300 mm.
Various drainage problems at joints and abutments.
Longitudinal cracking of the exposed soffit of the central span.
Longitudinal cracking on sides of the top flange of the pre-stressed sections.
Widespread spalling on some in situ concrete surfaces with exposed rusting reinforcement.

Assessment
The subject of an earlier paper, the objectives of the assessment were:
Estimate the present load-carrying capacity.
Identify any structural deficiencies in the original design.
Determine reasons for existing problems identified by the inspection.

Conclusion to the inspection and assessment
Following the inspection and the analytical assessment, one major element of doubt still existed. This concerned the condition of the embedded pre-stressing wires, strands, cables or bars. For the purpose of structural analysis these elements had been assumed to be sound. However, due to the very high forces involved, a risk to the structure, caused by corrosion of these primary elements, was identified.
The initial recommendations which completed the first phase of the assessment were:
1. Carry out detailed material testing to determine the condition of hidden structural elements, in particular the grouted post-tensioned steel cables.
2. Conduct concrete durability tests.
3. Undertake repairs to defective waterproofing and surface defects in concrete.

Testing procedures
Non-destructive radar testing
During the first phase investigation, at a joint between pre-cast deck segments, the observation of a void in a post-tensioned cable duct gave rise to serious concern about corrosion and the integrity of the pre-stress. However, the extent of this problem was extremely difficult to determine. The bridge contains 93 joints with an average of 24 cables passing through each joint, i.e. there were approximately 2200 positions where investigations could be carried out. At a typical section through such a joint, the 24 draped tendons within the spine did not give rise to concern, because these were protected by in situ concrete poured without joints after the cables had been stressed.

As it was clearly impractical to consider physically exposing all tendon/joint intersections, radar was used to investigate a large number of tendons and hence locate duct voids within a modest timescale. It was fortunate that the corrugated steel ducts around the tendons were discontinuous through the joints, which allowed the radar to detect the tendons and voids. The problem, however, was still highly complex due to the high density of other steel elements which could interfere with the radar signals, and the fact that the area of interest was at most 102 mm wide and embedded between 150 mm and 800 mm deep in thick concrete slabs.

Trial radar investigations
Three companies were invited to visit the bridge and conduct a trial investigation. One company decided not to proceed. The remaining two were given 2 weeks to mobilize, test and report.
Their results were then compared with physical explorations. To make the comparisons, observation holes were drilled vertically downwards into the ducts at a selection of 10 locations, which included several where voids were predicted and several where the ducts were predicted to be fully grouted. A 25-mm diameter hole was required in order to facilitate use of the chosen borescope. The results from the University of Edinburgh yielded an accuracy of around 60%.

Main radar survey, borescope verification of voids
Having completed a radar survey of the total structure, a borescope was then used to investigate all predicted voids, and in more than 60% of cases this gave a clear confirmation of the radar findings. In several other cases some evidence of honeycombing in the in situ stitch concrete above the duct was found.

When viewing voids through the borescope, however, it proved impossible to determine their actual size or how far they extended along the tendon ducts, although they only appeared to occupy less than the top 25% of the duct diameter. Most of these voids, in fact, were smaller than the diameter of the flexible borescope being used (approximately 9 mm) and were seen between the horizontal top surface of the grout and the curved upper limit of the duct. In a very few cases the tops of the pre-stressing strands were visible above the grout, but no sign of any trapped water was seen. It was not possible, using the borescope, to see whether those cables were corroded.

Digital radar testing
The test method involved exciting the joints using radio-frequency radar antennas: 1 GHz, 900 MHz and 500 MHz. The highest frequency gives the highest resolution but has shallow depth penetration in the concrete. The lowest frequency gives the greatest depth penetration but yields lower resolution.

The data collected on the radar sweeps were recorded on a GSSI SIR System 10. This system involves radar pulsing and recording. The data from the antenna is transformed from an analogue signal to a digital signal using a 16-bit analogue-to-digital converter, giving a very high resolution for subsequent data processing. The data is displayed on site on a high-resolution color monitor. Following visual inspection it is then stored digitally on a 2.3-gigabyte tape for subsequent analysis and signal processing. The tape first of all records a 'header' noting the digital radar settings together with the trace number prior to recording the actual data. When the data is played back, one is able to clearly identify all the relevant settings, making for accurate and reliable data reproduction. At particular locations along the traces, the trace was marked using a marker switch on the recording unit or the antenna.

All the digital records were subsequently downloaded at the University's NDT laboratory on to a micro-computer. (The raw data prior to processing consumed 35 megabytes of digital data.) Post-processing was undertaken using sophisticated signal processing software. Techniques available for the analysis include changing the color transform and changing the scales from linear to a skewed distribution in order to highlight certain features. Also, the color transforms could be changed to highlight phase changes. In addition to these color transform facilities, sophisticated horizontal and vertical filtering procedures are available (an illustrative sketch of such filtering follows this article). Using a large-screen monitor it is possible to display in split screens the raw data and the transformed processed data.
Thus one is able to get an accurate indication of the processing which has taken place. The computer screen displays the time-domain calibrations of the reflected signals on the vertical axis.

A further facility of the software was the ability to display the individual radar pulses as time-domain wiggle plots. This was a particularly valuable feature when looking at individual records in the vicinity of the tendons.

Interpretation of findings
A full analysis of findings is given elsewhere. Essentially, the digitized radar plots were transformed to color line scans, and where double phase shifts were identified in the joints, voiding was diagnosed.

Conclusions
1. An outline of the bridge research platform in Europe is given.
2. The use of impulse radar has contributed considerably to the level of confidence in the assessment of the Besses o' th' Barn Rail Bridge.
3. The radar investigations revealed extensive voiding within the post-tensioned cable ducts. However, no sign of corrosion on the stressing wires had been found except in the very first investigation.

欧洲桥梁研究

欧洲联盟共同的研究平台诞生于欧洲联盟。
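The horizontal and vertical filtering mentioned above is, in essence, background removal and smoothing applied to the matrix of recorded traces (a B-scan). The sketch below is illustrative only and is not the GSSI software used in the study; the synthetic data and the simple mean-subtraction filter are assumptions chosen to show the idea.

```python
import numpy as np

# Synthetic B-scan: rows = depth samples (time), columns = traces along the joint.
rng = np.random.default_rng(0)
n_samples, n_traces = 512, 200
bscan = 0.05 * rng.standard_normal((n_samples, n_traces))
bscan[200:210, :] += 1.0            # flat horizontal band, e.g. a slab interface
bscan[300:306, 80:120] += 0.8       # localized reflector, e.g. a duct/void region

def horizontal_filter(data):
    """Remove features that are constant along the scan direction
    (background removal): subtract each row's mean from that row."""
    return data - data.mean(axis=1, keepdims=True)

def vertical_filter(data, window=5):
    """Smooth each trace in depth with a short moving average to suppress noise."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda tr: np.convolve(tr, kernel, mode="same"), 0, data)

processed = vertical_filter(horizontal_filter(bscan))
# The flat band is largely removed, while the localized (void-like) reflector remains.
print(np.abs(processed[200:210]).mean(), np.abs(processed[300:306, 80:120]).mean())
```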
中英文对照外文翻译文献(文档含英文原文和中文翻译)

原文:

Evasion of Law and Mandatory Rules in Private International Law

INTRODUCTION
It has often been asserted that English private international law has no doctrine of evasion of the law. It is true that English law has never developed a general doctrine, like the French one of fraude à la loi, to deal with cases of evasion. Nonetheless, evasion of the law has been recognised as a problem in at least some areas of private international law, and an increasing number of specific anti-evasion measures have been introduced in response to this. The English approach towards evasion is a pragmatic one rather than being based on any broad underlying theory. In particular, the fundamental questions have not been addressed of what is wrong with evasion of the law and how it can be dealt with most effectively. The purpose of this article is to examine the present law on evasion, determine what is wrong with evasion of the law and put forward proposals for a principled approach to deal with the problem.

I THE PRESENT LAW ON EVASION
The most obvious sense in which the law is evaded is when persons deliberately flout the law, for example a taxpayer fails to declare all his income to the Inland Revenue, or a person smuggles goods into a country in breach of import controls. In such cases the party seeking to evade the law wishes no law to apply. The private international lawyer may be concerned with this type of case, for instance the English courts may be asked to enforce a contract the performance of which involves the illegal export of goods.

Of more interest to the private international lawyer, and the subject of this article, are those cases where laws are evaded by persons showing a preference for the application of one country's law rather than that of another. People can show this preference by going to another country in the expectation that that country's law will be applied to their affairs. This has happened in the sphere of family law, where evasive marriages, divorces and abduction of children are well known. Evasion can also take place in the commercial sphere, where the particular method of evasion takes a different form, i.e. contractual agreements that a particular law will apply. Those areas in which evasion has been recognised as being a problem: marriage, divorce, child abduction and custody, and contract, will now be examined in detail, after which some conclusions will be drawn on the nature of the approach towards evasion adopted under the present law.

A. Evasive Marriages
Evasive marriages have been a well known phenomenon since the earliest days of conflict of laws. Starting with Brook v. Brook in the middle of the nineteenth century there has been a spate of reported cases involving English couples going to Denmark or Germany to marry in order to evade the English law on the prohibited degrees of marriage. After the marriage the couple would return to live in England. The English courts strongly objected to the attempt to evade English law in these cases and refused to recognise the foreign marriage. The technique for dealing with the evasion was to classify the issue in the case as being one of essential validity and to apply the law of the domicile of the parties, England, to the question of the validity of the foreign marriage. In other words, the courts moulded their private international law rule on capacity to enter a marriage to stop evasion of the law.
The gradual relaxationin the prohibited degrees under English law has largely meant the end of such instances of evasion. However, it still remains the case that, for example, an uncle will be unable to marry his niece in England but he may be able to do so under some foreign systems of law.Better known to laymen than the Danish marriages cases are the Gretna Green marriage cases.At one time young English couples would elope to Scotland in order to evade the English requirement of parental consent for the marriage of a child between the ages of 16 and 21. Such a child could marry in Scotland without parental consent, there being very much less formality for marriage under Scots law. The reduction of the age of majority to 18 in England has meant that in most cases there is no longer any need for young couples to go to Scotland to enter into a valid marriage. However, the attitude of the English courts towards Gretna Green marriages is instructive and contrasts strongly with their attitude towards the Danish marriages. No objection was made to the parties evading the English requirement of parental consent by going to marry in Scotland and these Scots marriages were recognised as being valid. The issue was classified as one of formal validity and the law of the place of celebration was applied to the marriage, i.e. Scots law. The private international law rule was not moulded to stop evasion.The traffic in evasive marriage was not all one way. There are well known instances of French couples coming to England in order to evade stringent French requirements of parental consent to the marriage of children up to the age of 25. Not surprisingly, in the light of the Gretna Green marriages, these English marriages were regarded as being valid, despite the clear evasion of French law by the parties.B. Evasive DivorcesIn 1868 in Shaw v. Gould Lord Westbury, speaking in the context of a Scots divorce obtained by an English domiciliary said that:No nation can be required to admit that its domiciled subjects may lawfully resort to another country for the purpose of evading the laws under which they live. When they return to the country of their domicile, bringing back with them a foreign judgment so obtained, the tribunals of the domicile are entitled or even bound, to reject such judgment, as having no extra-territorial force or validity.A hundred years later there was considerable judicial concern that, whilst the rules on recognition of foreign divorces should be liberalised and made more flexible, "quickie" divorces obtained abroad after a short period of residence should not be recognised. The technique for achieving this was to introduce at common law a realand substantial connection test as a basis for the recognition of foreign divorces. A petitioner who was merely temporarily in, for example, Nevada when he obtained his divorce, would not be able to satisfy this test and the foreign divorce would not be recognised. The attitude subsequently changed and the Recognition of Divorces and Legal Separations Act 1971 enshrined the Law Commission's philosophy that, if there had been forum shopping, the harm had already been done, and in order to prevent a limping marriage the foreign divorce should still be recognised in England.This still remains the general view to this day. However, there are two specific statutory anti-evasion provisions which constitute exceptions to this general rule. Both provisions are concerned with extra-judicial divorces. 
There was a concern shown by the judiciary and then by Parliament that parties should not be able to evade the English system of divorces granted by courts and the English law on financial provision on divorce by obtaining in England an extrajudicial divorce. Since 1974 such extra-judicial divorces have therefore been denied recognition. There was then a concern that the particular statutory provision denying recognition to this type of divorce could itself be evaded by English residents going abroad, for example on a day trip to France, to obtain an extra-judicial divorce which, because it would be recognised in their foreign domicile, would be recognised in England. The latest version of the relevant statutory anti-evasion provision seeks to prevent this by denying recognition to extra-judicial divorces obtained, without proceedings, outside the British Islands if either spouse had for a period of one year immediately preceding the institution of the proceedings habitually been resident in the United Kingdom. The latter provision does nothing to prevent an English domiciliary from evading his financial responsibilities to his spouse by obtaining an extra-judicial divorce in the state of his nationality, and then having this recognised in England. However, the Court of Appeal in Chaudhary v. Chaudhary held that, in such circumstances, recognition of the divorce would be contrary to public policy, thereby preventing the evasion.C. Child Abduction and CustodyThe most recent problem of evasion to arise in the family law area involves cases of child abduction and custody. If the parents of a child are in dispute over the custody of a child and the parent who has not been granted custody by the English courts seizes the child and removes it abroad, there is a deliberate flouting of the English law in that the English custody order has been disregarded. This is regarded as a very serious matter and Parliament has intervened to introduce new criminal offencesconcerned with taking a child under the age of 16 out of the jurisdiction without consent. There may also be an element of the errant parent preferring the application of a foreign law in that this parent may seek and obtain a custody order abroad. The problem is essentially one of getting a foreign court to recognise the English custody order or the custody rights (if no order has been made) and return the child to England. There are now international conventions on child abduction and custody, and if the child is removed to a country which is a party to these conventions, that country may be obliged to recognise the English custody order and rights. As far as the United Kingdom is concerned the international conventions were brought into effect by the Child Abduction and Custody Act 1985, which requires English courts to recognise foreign custody orders and rights in certain circumstances.D. Evasive ContractsIn contract cases the judiciary appear on the face of it to have a strong objection to evasion of the law. In theory the requirement laid down in Vita Food Products v. Unus Shipping that the parties' choice of the applicable law must be made in good faith, will stop all cases of evasion of the law. Even if the case involves an issue of formal validity of the contract the bona fides doctrine can still come into play. This contrasts with marriage cases where, as has been seen, the evasion of formal requirements is not objected to. 
However, in practice the requirement of a bona fide choice does not appear to restrict the parties' freedom to choose the applicable law. There is no reported English case in which the parties' choice has been struck out on this ground. It is important to notice that, although the Vita Foods Case introduced a restriction on party autonomy, this restriction did not apply on the facts of the case and the result was to allow parties to evade the Hague Rules.Much more important than the common law doctrine of bona fides are the specific statutory anti-evasion provisions that have been introduced into the area of contract. The most famous of these is contained in section 27 of the Unfair Contract Terms Act 1977. This section prevents evasion of English law, or the law of any other part of the United Kingdom, by restricting the parties' freedom to choose a foreign law. It provides that the Act and the protection it gives to consumers still has effect if the choice of law appears "to have been imposed wholly or mainly for the purpose of enabling the party imposing it to evade the operation of this Act". The section goes on to provide as an alternative that the Act will apply, despite the parties' choice, if "in the making of the contract one of the parties dealt as consumer, and he was then habitually resident in the United Kingdom, and the essential steps necessary for themaking of the contract were taken there, whether by him or by others on his behalf". The section, more controversially, also prevents parties from evading foreign law. It restricts the right of parties, whose contract has a foreign objective proper law, to choose the law of part of the United Kingdom by providing that, in such a case, certain sections of the Act will not apply as part of the proper law.Another example of a statutory anti-evasion provision is to be found in the Carriage of Goods by Sea Act 1971, implementing the Hague-Visby Rules. Under the old Hague Rules there was a problem of people evading those Rules by the insertion of a choice of law clause in their contract. This gap was closed by the insertion in the Hague-Visby Rules of a new Article X to replace the original Article X in the Hague Rules. This lays down the territorial scope of the new Rules, and is coupled with a provision in the implementing legislation which states that the Rules, as set out in the Schedule to the Carriage of Goods by Sea Act 1971, shall have the force of law. According to the House of Lords in The Hollandia the intention of Parliament was for the new Rules to apply whenever the case comes within Article X, regardless of whether there is a foreign proper law. Their Lordships were concerned to interpret the Act and the Hague-Visby Rules in such a way as to prevent the possibility of their being evaded. As Lord Diplock said:[the Hague-Visby Rules]should be given a purposive rather than a narrow literalistic construction, particularly wherever the adoption of a literalist construction would enable the stated purpose of the international convention, viz., the unification of domestic laws of the contracting states relating to bills of lading, to be evaded by the use of colourable devices that, not being expressly referred to in the rules, are not specifically prohibited.An Ad Hoc ApproachWhat is noticeable about the present law on evasion is that no general principle has been developed to explain why evasion is regarded as objectionable in some cases but not in others. 
The law seems quite inconsistent, with evasive Gretna Green marriages being regarded as perfectly acceptable but evasive Danish marriages being regarded as beyond the pale. The whole approach towards evasion is essentially an ad hoc one; not only are different types of evasive marriage treated differently, but also evasive marriages are treated in isolation from evasive divorces or evasive contracts. This ad hoc approach extends to the technique for dealing with those cases where evasion is regarded as objectionable. In some cases specific statutory anti-evasionprovisions have been adopted to deal with evasion; in other cases the technique has been to mould common law rules to deal with the problem. The result is that the present law on evasion can be seen to be unduly complex, uncertain and inconsistent.A more principled approach is needed but before this can be developed an answer is required to the fundamental question, which so far has been ignored under English law: what is wrong with evasion of the law?II WHAT IS WRONG WITH EV ASION OF THE LAW?A. Moral GuiltThe term evasion is a loaded one, with connotations of shifty, underhand behaviour. It is easy to fall into the trap of automatically assuming that a desire to evade the law is in itself morally reprehensible. This is what has happened in the area of contract choice of law. The requirement that the choice of the applicable law must be made in good faith concentrates on the motives of the parties, and if these are impure the choice is necessarily regarded as a bad one and to be struck out. Yet there is nothing wrong in principle with parties choosing the law to govern their transactions. Indeed, it is very desirable that they should make such a choice. Party autonomy produces certainty in the law and upholds the expectations of the parties. In some cases this choice may be made on the basis that the application of one law is more convenient than that of another. In other cases the choice may be made on the basis that the content of one law is preferred by the parties to that of another. For example, the parties to an international insurance or shipping contract may choose the law of England to apply, despite the fact that there is no connection with this country, because they regard English law as being well developed in this area. No one would stigmatise this type of conduct. Is it any worse if the parties choose a law to apply because they prefer some specific provisions of that law to that of some other country? What the parties are usually trying to do in such a case is to ensure that a law is applied which provides that their transaction, whether it is a commercial contract or a marriage, is valid. This should not be a cause for concern, nor should it be condemned on the mistaken basis that the parties' motives are impure. Moreover, there is something slightly hypocritical in the judiciary making this type of moral judgment when they themselves are quite prepared to escape from applying a law the content of which is not to their liking by using such devices as public policy and renvoi.Whilst there is nothing wrong with the parties' motives in cases of evasion, this does not mean that the practice is unobjectionable. In some cases the evasion mayinvolve unfairness to someone else; the evasion may even be against the national interest.译文:国际私法中的法律规避和强制性规则引言人们常常说,英国的国际私法学说中没有法律规避原则。
中英文对照外文翻译文献(文档含英文原文和中文翻译)原文:Heat treatment of metalThe generally accepted definition for heat treating metals and metal alloys is “heating and cooling a solid metal or alloy in a way so as to obtain specific conditions or properties.” Heating for the sole purpose of hot working (as in forging operations) is excluded from this definition.Likewise,the types of heat treatment that are sometimes used for products such as glass or plastics are also excluded from coverage by this definition.Transformation CurvesThe basis for heat treatment is the time-temperature-transformation curves or TTT curves where,in a single diagram all the three parameters are plotted.Because of the shape of the curves,they are also sometimes called C-curves or S-curves.To plot TTT curves,the particular steel is held at a given temperature and the structure is examined at predetermined intervals to record the amount of transformation taken place.It is known that the eutectoid steel (T80) under equilibrium conditions contains,all austenite above 723℃,whereas below,it is the pearlite.To form pearlite,the carbon atoms should diffuse to form cementite.The diffusion being a rate process,would require sufficient time for complete transformation of austenite to pearlite.From different samples,it is possible to note the amount of the transformation taking place at any temperature.These points are then plotted on a graph with time and temperature as the axes.Through these points,transformation curves can be plotted as shown in Fig.1 for eutectoid steel.The curve at extreme left represents the time required for the transformation of austenite to pearlite to start at any given temperature.Similarly,the curve at extreme right represents the time required for completing the transformation.Between the two curves are the pointsrepresenting partial transformation. The horizontal lines Ms and Mf represent the start and finish of martensitic transformation.Classification of Heat Treating ProcessesIn some instances,heat treatment procedures are clear-cut in terms of technique and application.whereas in other instances,descriptions or simple explanations are insufficient because the same technique frequently may be used to obtain different objectives.For example, stress relieving and tempering are often accomplished with the same equipment and by use of identical time and temperature cycles.The objectives,however,are different for the two processes. The following descriptions of the principal heat treating processes are generally arranged according to their interrelationships.Normalizing consists of heating a ferrous alloy to a suitable temperature (usually 50°F to 100°F or 28℃ to 56℃) above its specific upper transformation temperature.This is followed by cooling in still air to at least some temperature well below its transformation temperature range.For low-carbon steels, the resulting structure and properties are the same as those achieved by full annealing;for most ferrous alloys, normalizing and annealing are not synonymous.Normalizing usually is used as a conditioning treatment, notably for refining the grains of steels that have been subjected to high temperatures for forging or other hot working operations. The normalizing process usually is succeeded by another heat treating operation such as austenitizing for hardening, annealing, or tempering.Annealing is a generic term denoting a heat treatment that consists of heating to and holding at a suitable temperature followed by cooling at a suitable rate. 
It is used primarily to soften metallic materials, but also to simultaneously produce desired changes in other properties or in microstructure. The purpose of such changes may be, but is not confined to, improvement of machinability, facilitation of cold work (known as in-process annealing), improvement of mechanical or electrical properties, or to increase dimensional stability. When applied solely to relieve stresses, it commonly is called stress-relief annealing, synonymous with stress relieving.

When the term "annealing" is applied to ferrous alloys without qualification, full annealing is implied. This is achieved by heating above the alloy's transformation temperature, then applying a cooling cycle which provides maximum softness. This cycle may vary widely, depending on composition and characteristics of the specific alloy.

Quenching is a rapid cooling of a steel or alloy from the austenitizing temperature by immersing the workpiece in a liquid or gaseous medium. Quenching media commonly used include water, 5% brine, 5% caustic in an aqueous solution, oil, polymer solutions, or gas (usually air or nitrogen).

Selection of a quenching medium depends largely on the hardenability of the material and the mass of the material being treated (principally section thickness). The cooling capabilities of the above-listed quenching media vary greatly. In selecting a quenching medium, it is best to avoid a solution that has more cooling power than is needed to achieve the results, thus minimizing the possibility of cracking and warping of the parts being treated. Modifications of the term quenching include direct quenching, fog quenching, hot quenching, interrupted quenching, selective quenching, spray quenching, and time quenching.

Tempering. In heat treating of ferrous alloys, tempering consists of reheating the austenitized and quench-hardened steel or iron to some preselected temperature that is below the lower transformation temperature (generally below 1300 °F, or 705 °C). Tempering offers a means of obtaining various combinations of mechanical properties. Tempering temperatures used for hardened steels are often no higher than 300 °F (150 °C). The term "tempering" should not be confused with either process annealing or stress relieving. Even though time and temperature cycles for the three processes may be the same, the conditions of the materials being processed and the objectives may be different.

Stress relieving. Like tempering, stress relieving is always done by heating to some temperature below the lower transformation temperature for steels and irons. For nonferrous metals, the temperature may vary from slightly above room temperature to several hundred degrees, depending on the alloy and the amount of stress relief that is desired.

The primary purpose of stress relieving is to relieve stresses that have been imparted to the workpiece by such processes as forming, rolling, machining or welding.
The usual procedure is to heat the workpiece to the pre-established temperature long enough to reduce the residual stresses (this is a time- and temperature-dependent operation) to an acceptable level; this is followed by cooling at a relatively slow rate to avoid creation of new stresses.
The normalizing process usually is succeeded by another heat treating operation such as austenitizing for hardening, annealing, or tempering.Annealing is a generic term denoting a heat treatment that consists of heating to and holding at a suitable temperature followed by cooling at a suitable rate. It is used primarily to soften metallic materials, but also to simultaneously produce desired changes in other properties or in microstructure. The purpose of such changes may be, but is not confined to, improvement of machinability, facilitation of cold work (known as in-process annealing), improvement of mechanical or electrical properties, or to increase dimensional stability. When applied solely to relive stresses, it commonly is called stress-relief annealing, synonymous with stress relieving.When the term “annealing” is applied to ferrous alloys without qualification, full annealing is applied. This is achieved by heating above the alloy’s transformation temperature, then applying a cooling cycle which provides maximum softness. This cycle may vary widely, depending on composition and characteristics of the specific alloy.Quenching is a rapid cooling of a steel or alloy from the austenitizing temperature by immersing the workpiece in a liquid or gaseous medium. Quenching medium commonly used include water, 5% brine, 5% caustic in an aqueous solution, oil, polymer solutions, or gas (usually air or nitrogen).Selection of a quenching medium depends largely on the hardenability of material and the mass of the material being treating (principally section thickness).The cooling capabilities of the above-listed quenching media vary greatly. In selecting aquenching medium, it is best to avoid a solution that has more cooling power than is needed to achieve the results, thus minimizing the possibility of cracking and warp of the parts being treated. Modifications of the term quenching include direct quenching, fog quenching, hot quenching, interrupted quenching, selective quenching, spray quenching, and time quenching.Tempering. In heat treating of ferrous alloys, tempering consists of reheating the austenitized and quench-hardened steel or iron to some preselected temperature that is below the lower transformation temperature (generally below 1300 ℃ or 705 ℃). Tempering offers a means of obtaining various combinations of mechanical properties. Tempering temperatures used for hardened steels are often no higher than 300 ℃(150 ℃). The term “tempering” should not be confused with either process annealing or stress relieving. Even though time and temperature cycles for the three processes may be the same, the conditions of the materials being processed and the objectives may be different.Stress relieving. Like tempering, stress relieving is always done by heating to some temperature below the lower transformation temperature for steels and irons. For nonferrous metals, the temperature may vary from slightly above room temperature to several hundred degrees, depending on the alloy and the amount of stress relief that is desired.The primary purpose of stress relieving is to relieve stresses that have been imparted to the workpiece from such processes as forming, rolling, machining or welding. 
The usual procedure is to heat the workpiece to the pre-established temperature and hold it long enough to reduce the residual stresses (this is a time- and temperature-dependent operation) to an acceptable level; this is followed by cooling at a relatively slow rate to avoid the creation of new stresses.

Translation: Metal Heat Treatment — The generally accepted definition for heat treating metals and metal alloys is heating and cooling a solid metal or alloy in a way so as to obtain specific conditions or properties.
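To make the TTT construction described above more concrete, the short sketch below uses the standard JMAK (Avrami) relation for isothermal transformation, X(t) = 1 − exp(−k·t^n), to turn assumed rate constants into "start" (~1% transformed) and "finish" (~99% transformed) times of the kind plotted on a TTT diagram. The hold temperatures, k values, and exponent n are illustrative assumptions, not measured data for eutectoid steel.

```python
import math

def jmak_fraction(t_s, k, n):
    """Fraction transformed after t_s seconds under the JMAK/Avrami model."""
    return 1.0 - math.exp(-k * t_s ** n)

def time_for_fraction(x, k, n):
    """Invert the JMAK relation: time needed to reach fraction x transformed."""
    return (-math.log(1.0 - x) / k) ** (1.0 / n)

# Illustrative (not measured) kinetic constants for three isothermal holds.
# In a real study, k and n would be fitted to metallographic observations.
isothermal_holds = {600: 1e-6, 650: 5e-7, 700: 1e-7}  # hold temperature (C) -> k
n_exponent = 3.0

for temp_c, k in isothermal_holds.items():
    t_start = time_for_fraction(0.01, k, n_exponent)   # ~1% transformed: "start" curve
    t_finish = time_for_fraction(0.99, k, n_exponent)  # ~99% transformed: "finish" curve
    print(f"{temp_c} C hold: start ~{t_start:7.1f} s, finish ~{t_finish:7.1f} s")
```

Repeating such a calculation (or, in practice, the underlying metallographic measurements) over many hold temperatures traces out the C-shaped start and finish curves of Fig. 1.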
中英文对照外文翻译(文档含英文原文和中文翻译)外文文献17In the first book we considered the idea merely as such, that is, only according to its general form. It is true that as far as the abstract idea, the concept, is concerned, we obtained a knowledge of it in respect of its content also, because it has content and meaning only in relation to the idea of perception, with out which it would be worthless and empty. Accordingly, directing our attention exclusively to the idea of perception, we shall now endeavour to arrive at a knowledge of its content, its more exact definition, and the forms which it presents to us. And it will specially interest us to find an explanation of its peculiar significance, that significance which is otherwise merely felt, but on account of which it is that these pictures do not pass by us entirely strange and meaningless, as they must other wise do, but speak to us directly, are understood, and obtain an interest which concerns our whole nature.We direct our attention to mathematics, natural science, and philosophy, for each of these holds out the hope that it will afford us a part of the explanation we desire. Now, taking philosophy first, we find that it is like a monster with many heads, each of which speaks a different language. They are not, indeed, all at variance on the point we are here considering, the significance of the idea of perception. For, with the exception of the Sceptics and the Idealists, the others, for the most part, speak very much in the same way of an object which constitutes the basis of the idea, and which is indeed different in its whole being and nature from the idea, but yet isin all points as like it as one egg is to another. But this does not help us, for we are quite unable to distinguish such an object from the idea; we find that they are one and the same; for every object always and for ever presupposes a subject, and therefore remains idea, so that we recognised objectivity as belonging to the most universal form of the idea, which is the division into subject and object. Further, the principle of sufficient reason, which is referred to in support of this doctrine, is for us merely the form of the idea, the orderly combination of one idea with another, but not the combination of the whole finite or infinite series of ideas with something which is not idea at all, and which cannot therefore be presented in perception. Of the Sceptics and Idealists we spoke above, in examining the controversy about the reality of the outer world.If we turn to mathematics to look for the fuller knowledge we desire of the idea of perception, which we have, as yet, only understood generally, merely in its form, we find that mathematics only treats of these ideas so far as they fill time and space, that is, so far as they are quantities. It will tell us with the greatest accuracy thehow-many and the how-much; but as this is always merely relative, that is to say, merely a comparison of one idea with others, and a comparison only in the one respect of quantity, this also is not the information we are principally in search of.Lastly, if we turn to the wide province of natural science, which is divided into many fields, we may, in the first place, make a general division of it into two parts. It is either the description of forms, which I call Morphology, or the explanation of changes, which I call Etiology. 
The first treats of the permanent forms, the second of the changing matter, according to the laws of its transition from one form to another.The first is the whole extent of what is generally called natural history. It teaches us, especially in the sciences of botany and zoology, the various permanent, organised, and therefore definitely determined forms in the constant change of individuals; and these forms constitute a great part of the content of the idea of perception. In natural history they are classified, separated, united, arranged according to natural and artificial systems, and brought under concepts which make a general view and knowledge of the whole of them possible. Further, an infinitely fine analogy both in the whole and in the parts of these forms, and running through them all (unité de plan), is established, and thus they may be com pared to innumerable variations on a theme which is not given. The passage of matter into these forms, that is to say, the origin of individuals, is not a special part of natural science, for every individual springs from its like by generation, which is everywhere equally mysterious, and has as yet evaded definite knowledge. The little that is known on the subject finds its place in physiology, which belongs to that part of natural science I have called etiology. Mineralogy also, especially where it becomes geology, inclines towards etiology, though it principally belongs to morphology. Etiology proper comprehends all those branches of natural science in which the chief concern is the knowledge of cause and effect. The sciences teach how, according to an invariable rule, one condition of matter is necessarily followed by a certain other condition; how one change necessarily conditions and brings about a certain other change; this sort of teaching is called explanation. The principal sciences in this department are mechanics, physics, chemistry, and physiology.If, however, we surrender ourselves to its teaching, we soon become convinced that etiology cannot afford us the information we chiefly desire, any more than morphology. The latter presents to us innumerable and in finitely varied forms, which are yet related by an unmistakable family likeness. These are for us ideas, and when only treated in this way, they remain always strange to us, and stand before us like hieroglyphics which we do not understand. Etiology, on the other hand, teaches us that, according to the law of cause and effect, this particular condition of matter brings about that other particular condition, and thus it has explained it and performed its part. However, it really does nothing more than indicate the orderlyarrangement according to which the states of matter appear in space and time, and teach in all cases what phenomenon must necessarily appear at a particular time in a particular place. It thus determines the position of phenomena in time and space, according to a law whose special content is derived from experience, but whose universal form and necessity is yet known to us independently of experience. But it affords us absolutely no information about the inner nature of any one of these phenomena: this is called a force of nature, and it lies outside the province of causal explanation, which calls the constant uniformity with which manifestations of such a force appear whenever their known conditions are present, a law of nature. 
But this law of nature, these conditions, and this appearance in a particular place at a particular time, are all that it knows or ever can know. The force itself which manifests itself, the inner nature of the phenomena which appear in accordance with these laws, remains always a secret to it, something entirely strange and unknown in the case of the simplest as well as of the most complex phenomena. For although as yet etiology has most completely achieved its aim in mechanics, and least completely in physiology, still the force on account of which a stone falls to the ground or one body repels another is, in its inner nature, not less strange and mysterious than that which produces the movements and the growth of an animal. The science of mechanics presupposes matter, weight, impenetrability, the possibility of communicating motion by impact, inertia and so forth as ultimate facts, calls them forces of nature, and their necessary and orderly appearance under certain conditions a law of nature. Only after this does its explanation begin, and it consists in indicating truly and with mathematical exactness, how, where and when each force manifests itself, and in referring every phenomenon which presents itself to the operation of one of these forces. Physics, chemistry, and physiology proceed in the same way in their province, only they presuppose more and accomplish less. Consequently the most complete etiological explanation of the whole of nature can never be more than an enumeration of forces which cannot be explained, and a reliable statement of the rule according to which phenomena appear in time and space, succeed, and make way for each other. But the inner nature of the forces which thus appear remains unexplained by such an explanation, which must confineitself to phenomena and their arrangement, because the law which it follows does not extend further. In this respect it may be compared to a section of a piece of marble which shows many veins beside each other, but does not allow us to trace the course of the veins from the interior of the marble to its surface. Or, if I may use an absurd but more striking comparison, the philosophical investigator must always have the same feeling towards the complete etiology of the whole of nature, as a man who, without knowing how, has been brought into a company quite unknown to him, each member of which in turn presents another to him as his friend and cousin, and therefore as quite well known, and yet the man himself, while at each introduction he expresses himself gratified, has always the question on his lips: "But how the deuce do I stand to the whole company?"Thus we see that, with regard to those phenomena which we know only as our ideas, etiology can never give us the desired information that shall carry us beyond this point. For, after all its explanations, they still remain quite strange to us, as mere ideas whose significance we do not understand. The causal connection merely gives us the rule and the relative order of their appearance in space and time, but affords us no further knowledge of that which so appears. Moreover, the law of causality itself has only validity for ideas, for objects of a definite class, and it has meaning only in so far as it presupposes them. Thus, like these objects themselves, it always exists only in relation to a subject, that is, conditionally; and so it is known just as well if we start from the subject, i.e., a priori, as if we start from the object, i.e., a posteriori. 
Kant indeed has taught us this.But what now impels us to inquiry is just that we are not satisfied with knowing that we have ideas, that they are such and such, and that they are connected according to certain laws, the general expression of which is the principle of sufficient reason. We wish to know the significance of these ideas; we ask whether this world is merely idea; in which case it would pass by us like an empty dream or a baseless vision, not worth our notice; or whether it is also something else, something more than idea, and if so, what. Thus much is certain, that this something we seek for must be completely and in its whole nature different from the idea; that the forms and laws of the idea must therefore be completely foreign to it; further, thatwe cannot arrive at it from the idea under the guidance of the laws which merely combine objects, ideas, among themselves, and which are the forms of the principle of sufficient reason.Thus we see already that we can never arrive at the real nature of things from without. However much we investigate, we can never reach anything but images and names. We are like a man who goes round a castle seeking in vain for an entrance, and sometimes sketching the façades. And yet this is the method that has been followed by all philosophers before me.18In fact, the meaning for which we seek of that world which is present to us only as our idea, or the transition from the world as mere idea of the knowing subject to whatever it may be besides this, would never be found if the investigator himself were nothing more than the pure knowing subject (a winged cherub without a body). But he is himself rooted in that world; he finds himself in it as an individual, that is to say, his knowledge, which is the necessary supporter of the whole world as idea, is yet always given through the medium of a body, whose affections are, as we have shown, the starting-point for the understanding in the perception of that world. His body is, for the pure knowing subject, an idea like every other idea, an object among objects. Its movements and actions are so far known to him in precisely the same way as the changes of all other perceived objects, and would be just as strange and incomprehensible to him if their meaning were not explained for him in an entirely different way. Otherwise he would see his actions follow upon given motives with the constancy of a law of nature, just as the changes of other objects follow upon causes, stimuli, or motives. But he would not understand the influence of the motives any more than the connection between every other effect which he sees and its cause. He would then call the inner nature of these manifestations and actions of his body which he did not understand a force, a quality, or a character, as he pleased, but he would have no further insight into it. But all this is not the case; indeed, the answer to the riddle is given to the subject of knowledge who appears as an individual, and the answer is will. This and this alone gives him the key to his own existence, reveals to him the significance, shows him the inner mechanism of hisbeing, of his action, of his movements. The body is given in two entirely different ways to the subject of knowledge, who becomes an individual only through his identity with it. It is given as an idea in intelligent perception, as an object among objects and subject to the laws of objects. 
And it is also given in quite a different way as that which is immediately known to every one, and is signified by the word will. Every true act of his will is also at once and without exception a movement of his body. The act of will and the movement of the body are not two different things objectively known, which the bond of causality unites; they do not stand in the relation of cause and effect; they are one and the same, but they are given in entirely different ways, — immediately, and again in perception for the understanding. The action of the body is nothing but the act of the will objectified, i.e., passed into perception. It will appear later that this is true of every movement of the body, not merely those which follow upon motives, but also involuntary movements which follow upon mere stimuli, and, indeed, that the whole body is nothing but objectified will, i.e., will become idea. All this will be proved and made quite clear in the course of this work. In one respect, therefore, I shall call the body the objectivity of will; as in the previous book, and in the essay on the principle of sufficient reason, in accordance with the one-sided point of view intentionally adopted there (that of the idea), I called it the immediate object. Thus in a certain sense we may also say that will is the knowledge a priori of the body, and the body is the knowledge a posteriori of the will. Resolutions of the will which relate to the future are merely deliberations of the reason about what we shall will at a particular time, not real acts of will. Only the carrying out of the resolve stamps it as will, for till then it is never more than an intention that may be changed, and that exists only in the reason in abstracto. It is only in reflection that to will and to act are different; in reality they are one. Every true, genuine, immediate act of will is also, at once and immediately, a visible act of the body. And, corresponding to this, every impression upon the body is also, on the other hand, at once and immediately an impression upon the will. As such it is called pain when it is opposed to the will; gratification or pleasure when it is in accordance with it. The degrees of both are widely different. It is quite wrong, however, to call pain and pleasure ideas, for they are by no meansideas, but immediate affections of the will in its manifestation, the body; compulsory, instantaneous willing or not-willing of the impression which the body sustains. There are only a few impressions of the body, which do not touch the will, and it is through these alone that the body is an immediate object of knowledge, for, as perceived by the understanding, it is already an indirect object like all others. These impressions are, therefore, to be treated directly as mere ideas, and excepted from what has been said. The impressions we refer to are the affections of the purely objective senses of sight, hearing, and touch, though only so far as these organs are affected in the way which is specially peculiar to their specific nature. This affection of them is so excessively weak an excitement of the heightened and specifically modified sensibility of these parts that it does not affect the will, but only furnishes the understanding with the data out of which the perception arises, undisturbed by any excitement of the will. But every stronger or different kind of affection of these organs of sense is painful, that is to say, against the will, and thus they also belong to its objectivity. 
Weakness of the nerves shows itself in this, that the impressions which have only such a degree of strength as would usually be sufficient to make them data for the understanding reach the higher degree at which they influence the will, that is to say, give pain or pleasure, though more often pain, which is, however, to some extent deadened and inarticulate, so that not only particular tones and strong light are painful to us, but there ensues a generally unhealthy and hypochondriacal disposition which is not distinctly understood. The identity of the body and the will shows itself further, among other ways, in the circumstance that every vehement and excessive movement of the will, i.e., every emotion, agitates the body and its inner constitution directly, and disturbs the course of its vital functions. This is shown in detail in "Will in Nature," p. 27 of the second edition and p. 28 of the third.

Translation of the foreign literature: 17. In the first book we considered the idea merely as idea, and thus examined it only in its universal form.
中英文对照外文翻译文献(文档含英文原文和中文翻译)原文:Housing Problems and Options for the Elderly 1. IntroductionHousing is a critical element in the lives of older persons. The affordability of housing affects the ability of the elderly to afford other necessities of life such as food and medical care. Housing that is located near hospitals and doctors, shopping, transportation, and recreational facilities can facilitate access to services that can enhance the quality of life. Housing can also be a place of memories of the past and a connection to friends and neighbors. Housing with supportive features and access to services can also make it possible for persons to age in place. In this session, we will be examining housing problems andoptions for the elderly. Along the way, we will be testing your housing IQ with a series of questions and exercises.2. Housing Situation of Older PersonsHow typical is the housing situation of the olders?We will begin by examining five areas :(1)Prevalence of home ownership (2)Length of stay in current residence (3)Living arrangements (4)Attachments of older persons to where they live (5)Moving behavior.With whom older persons live can influence housing affordability, space needs, and the ability to age in place. About 54% of older persons live with their spouses, 31% live alone, almost 13% live with related persons other than their spouse and about 2% live with unrelated persons. With increasing age, older persons (primarily women) are more likely to live alone or with a relative other than a spouse. Frail older women living alone are the persons most likely to reside in homes with ‘extra’ rooms and to need both physically supportive housing features and services to "age in place". This segment of the population is also the group most likely to move to more supportive housing settings such as assisted living.Many older persons have strong psychological attachments to their homes related to length of residence. The home often represents the place where they raised their children and a lifetime of memories. It is also a connection to an array of familiar persons such as neighbors and shopkeepers as well as near by places including houses of worship, libraries and community services. For manyolder persons, the home is an extension of their own personalities which is found in the furnishings . In addition, the home can represent a sense of economic security for the future, especially for homeowners who have paid off their mortgages. For owners, the home is usually their most valuable financial asset. The home also symbolizes a sense of independence in that the resident is able to live on his or her own. For these types of reasons, it is understandable that in response to a question about housing preferences, AARP surveys of older persons continue to find that approximately 80% of older persons report that what they want is to "stay in their own homes and never move." This phenomena has been termed the preference to "age in place."Although most older persons move near their current communities, some seek retirement communities in places with warmer weather in the southwest, far west and the south.3. The Federal Government's Housing Programs for the ElderlyThe federal government has had two basic housing strategies to address housing problems of the elderly. One strategy, termed the "supply side" approach, seeks to build new housing complexes such as public housing and Section 202 housing for older persons. 
Public housing is administered by quasi-governmental local public housing authorities. Section 202 Housing for the elderly and disabled is sponsored by non-profit organizations including religious and non-sectarian organizations. Approximately 1.5 million olderpersons or 3% of the elderly population live in federally assisted housing, with about 387,000 living in Section 202 housing. Over time, the government has shifted away from such new construction programs because of the cost of such housing, the problems that a number of non-elderly housing programs have experienced, and a philosophy that the government should no longer be directly involved with the building of housing. Section 202 housing, a very popular and successful program, is one of the few supply-side programs funded by the federal government, although the budget allocation during the last ten years has allowed for the construction of only about 6,000 units per year compared to a high of almost 20,000 units in the late 1970s. Instead of funding new construction, federal housing initiatives over the last decade have emphasized ‘demand side’ subsidies that provide low-income renters with a certificate or a voucher that they can use in a variety of multiunit settings, including apartments in the private sector that meet rental and condition guidelines. These vouchers and certificates are aimed at reducing excessive housing costs. Some certificates are termed ‘project based’ subsidies and are tied to federally subsidized housing such as Section 202. Because housing programs are not an entitlement, however, supply-side and demand side programs together are only able to meet the needs of about 1/3 of elderly renters who qualify on the basis of income.While advocates for housing have been trying to hold on to the existing programs in the face of huge budget cuts at HUD, much of the attention has been shifting towards meeting the shelter and service needs of the frail elderly. This emphasis reflects the increasing number of older persons in their eightiesand nineties who need a physically supportive environment linked with services. This group of older persons includes a high percentage of older residents of public and Section 202 housing. Initially built for independent older persons who were initially in the late sixties and early seventies, this type of housing now includes older persons in their eighties and nineties, many of whom have aged in place. Consequently, the government is faced with creating strategies to bring services into these buildings and retrofit them to better suit the needs of frail older persons. A major initiative of the early 1990s, which may be stalled by current budget problems at HUD, has been for the federal government to pay for service coordinators to assess the needs of residents of government assisted housing complexes and link them with services. As of 1998, there were approximately 1,000 service coordinators attached to government assisted housing complexes across the country.4. The Housing Continuum: A Range of Options for ElderlyA long-standing assumption in the field of housing has been that as persons become more frail, they will have to move along a housing continuum from one setting to another. As the figure on housing options suggests, along this continuum are found a range of housing options including single family homes, apartments, congregate living, assisted living, and board and care homes (Kendig & Pynoos, 1996). The end point of the housing continuum has been thenursing home. 
These options vary considerably in terms of their availability, affordability, and ability to meet the needs of very frail older persons.The concept of a continuum of supportive care is based on the assumption that housing options can be differentiated by the amount and types of services offered; the supportiveness of the physical setting in terms of accessibility, features, and design; and the competency level of the persons to whom the housing is targeted. The figure on housing options indicates how such options generally meet the needs of older persons who are categorized,as independent, semi-dependent and dependent. Semi-dependent older persons can be thought of as needing some assistance from other persons with instrumental activities of daily living (IADLs) such as cooking, cleaning, and shopping. In addition to needing assistance with some IADLs, dependent older persons may require assistance with more basic activities such as toileting, eating and bathing. Although semi-dependent and dependent older persons can be found throughout the housing continuum, independent older persons are very unlikely to reside in housing types such as assisted living specifically designed and equipped to meet the needs of frail older persons unless their spouses require these needs.Although the continuum of housing identifies a range of housing types, there is increasing recognition that frail older persons do not necessarily have to move from one setting to another if they need assistance. Semi-dependent or dependent older persons can live in a variety of settings, including their own homes and apartments, if the physical environment is made more supportive, caregivers are available to provide assistance and affordable services areaccessible.5. ConclusionsHousing plays a critical role in the lives of older persons. Most older homeowners who function independently express a high level of satisfaction with their dwelling units. However, high housing costs, especially for renters, remain a financial burden for many older persons and problems associated with housing condition persist especially for low- income renters and persons living in rural areas. Federal housing programs such as public housing, Section 202 housing, and Section 8 housing certificates have only been able to address the basic housing problems of only about one-third of eligible older persons because of limited budgets. Moreover, a shortage of viable residential options exists for frail older persons. Up until the last decade, housing for the elderly was conceived of primarily as shelter. It has become increasingly recognized that frail older persons who needed services and physically supportive features often had to move from their homes or apartments to settings such as board and care or nursing homes to receive assistance. Over time, however, the concept of a variety of housing types that can be linked has replaced the original idea of the continuum of housing. It is possible for frail older persons to live in a variety of existing residential settings, including their own homes and apartments with the addition of services and home modifications. Consequently, the last decade has seen a number of efforts to modify homes, add service coordinators to multi-unit housing and create options such as accessory and ECHO units. Although thesestrategies have been enhanced by a somewhat greater availability of home care services, Medicaid policy still provides incentives to house frail older persons in nursing homes. 
The most visible development in the field of housing for frail older persons has been the growth of private-sector assisted living (AL), which is now viewed by many state governments as a residential alternative to nursing homes. The AL movement itself has raised a number of regulatory and financing issues that cross-cut housing and long-term care, such as what constitutes a residential environment, ensuring that residents can age in place, accommodating resident preferences, protecting the rights of individuals, and ensuring quality of care. Nevertheless, the emergence of AL, along with a wider range of other housing options, holds out the promise that older persons will have a larger range of choices among living arrangements.

Translation: Housing Problems and Options for the Elderly. 1. Introduction. Housing is extremely important in the lives of older persons.
Human Geomatics in Urban Design—Two Case Studies
Małgorzata Hanzl 1,*, Karol Dzik 2, Paulina Kowalczyk 2, Krystian Kwieciński 2, Ewa Stankiewicz 2 and Agata Ł. Wierzbicka 2

Abstract: The mapping of different aspects of urban phenomena and their relation to the physical cityscape has been greatly extended by the use of geomatics. The tradition of basing reasoning on 'understanding the world' dates from the time of Aristotle. The extension plan for Barcelona (Eixample), developed by Cerdà, which opened the era of modern urban planning, was preceded by analyses of rich data describing both detailed demographic issues and physical structures. Contemporary, postmodernist city planning continues this tradition, although a shift towards analyses of more human-related issues can be observed, covering, inter alia, citizens' perception, cultural differences and patterns of human activities with regard to distinct social groups. The change towards a more human-related perspective and the inclusion of urban morphology analyses are direct consequences of this trend. The required data may be gathered within a crowd-sourcing participation process. According to communicative planning theory, communication with the wider public is indispensable in order to achieve the best results, and can be realized with the use of sophisticated IT tools. Evidence-based reasoning may be supported by images of significant aesthetic values, which inspire immediate reactions.

Keywords: GIS; crowd-sourcing; mash-up; education; urban planning; urban analyses

Abstract (translation): The mapping of different aspects of urban phenomena and their relation to the physical cityscape has been greatly extended by the use of geomatics.
Automotive Electronic Systems — Chinese-English Parallel Foreign Literature Translation (document contains the English original and Chinese translation)

The Changing Automotive Environment: High-Temperature Electronics
R. Wayne Johnson, Fellow, IEEE, John L. Evans, Peter Jacobsen, James R. (Rick) Thompson, and Mark Christopher

Abstract—The underhood automotive environment is harsh, and current trends in the automotive electronics industry will be pushing the temperature envelope for electronic components. The desire to place engine control units on the engine and transmission control units either on or in the transmission will push the ambient temperature above 125℃. However, extreme cost pressures, increasing reliability demands (10 years/241,350 km) and the cost of field failures (recalls, liability, customer loyalty) will make the shift to higher temperatures occur incrementally. The coolest spots on the engine and in the transmission will be used. These large bodies do provide considerable heat sinking to reduce the temperature rise due to power dissipation in the control unit. The majority of near-term applications will be at 150℃ or less, and these will be worst-case temperatures, not nominal. The transition to X-by-wire technology, replacing mechanical and hydraulic systems with electromechanical systems, will require more power electronics. Integration of power transistors and smart power devices into the electromechanical actuator will require power devices to operate at 175℃ to 200℃. Hybrid electric vehicles and fuel cell vehicles will also drive the demand for higher temperature power electronics. In the case of hybrid electric and fuel cell vehicles, the high temperature will be due to power dissipation. The alternatives to high-temperature devices are thermal management systems, which add weight and cost. Finally, the number of sensors in vehicles is increasing as more electrically controlled systems are added. Many of these sensors must work in high-temperature environments. The harshest applications are exhaust gas sensors and cylinder pressure or combustion sensors. High-temperature electronics use in automotive systems will continue to grow, but it will be gradual as cost and reliability issues are addressed. This paper examines the motivation for higher temperature operation and the packaging limitations even at 125℃ with newer package styles, and concludes with a review of the challenges at both the semiconductor device and packaging level as temperatures push beyond 125℃.

Index Terms—Automotive, extreme-environment electronics.

I. INTRODUCTION
In 1977, the average automobile contained $110 worth of electronics [1]. By 2003 the electronics content was $1510 per vehicle and is expected to reach $2285 in 2013 [2]. The turning point in automotive electronics was government regulation in the 1970s mandating emissions control and fuel economy. The complex fuel control required could not be accomplished using traditional mechanical systems. These government regulations, coupled with increasing semiconductor computing power at decreasing cost, have led to an ever increasing array of automotive electronics. Automotive electronics can be divided into five major categories as shown in Table I. The operating temperature of the electronics is a function of location, power dissipation by the electronics, and the thermal design. The automotive electronics industry defines high-temperature electronics as electronics operating above 125℃.

[Table I: Major automotive electronic systems]
[Table II: Automotive temperature extremes (Delphi Delco Electronic Systems) [3]]
However, the actual temperature for various electronics mounting locations varies considerably. Delphi Delco Electronic Systems recently published the typical continuous maximum temperatures, as reproduced in Table II [3]. The corresponding underhood temperatures are shown in Fig. 1. The authors note that typical junction temperatures for integrated circuits are 10℃ to 15℃ higher than the ambient or baseplate temperature, while power devices can reach 25℃ higher. At-engine temperatures of 125℃ peak can be maintained by placing the electronics on the intake manifold.

[Fig. 1: Engine compartment thermal profile (Delphi Delco Electronic Systems) [3]]
[Table III: The automotive environment (General Motors and Delphi Delco Electronic Systems) [4]]
[Table IV: Required operation temperature for automotive electronic systems (Toyota Motor Corp.) [5]]
[Table V: Mechatronic maximum temperature ranges (DaimlerChrysler, Eaton Corporation, and Auburn University) [6]]
[Fig. 2: Automotive temperatures and related systems (DaimlerChrysler) [8]]

Fig. 3 shows an actual measured transmission temperature profile during normal and excessive driving conditions [8]. Power braking is a commonly used test condition where the brakes are applied and the engine is revved with the transmission in gear. A similar real-world situation would be applying throttle with the emergency brake applied. Note that when the temperature reached 135℃, the over-temperature light came on, and at the peak temperature of 145℃ the transmission was beginning to smell of burnt transmission fluid.

[Table VI: 2002 International Technology Roadmap for Semiconductors ambient operating temperatures for harsh environments (automotive) [9]]

The 2002 update to the International Technology Roadmap for Semiconductors (ITRS) did not reflect the need for higher operating temperatures for complex integrated circuits, but did recognize increasing temperature requirements for power and linear devices, as shown in Table VI [9]. Higher temperature power devices (diodes and transistors) will be used for the power section of power converters and motor drives for electromechanical actuators. Higher temperature linear devices will be used for analog control of power converters and for amplification and some signal processing of sensor outputs prior to transmission to the control units. It should be noted that at the maximum rated temperature for a power device, the power handling capability is derated to zero. Thus, a 200℃ rated power transistor in a 200℃ environment would have zero current-carrying capability, so the actual operating environment must be lower than the maximum rating.

In the 2003 edition of the ITRS, the maximum junction temperature identified for harsh-environment complex integrated circuits was raised to 150℃ through 2018 [9]. The ambient operating temperature extreme for harsh-environment complex integrated circuits was defined as −40℃ to 125℃ through 2009, increasing to −40℃ to 150℃ for 2010 and beyond. Power/linear devices were not separately listed in 2003. The ITRS is consistent with the current automotive high-temperature limitations. Delphi Delco Electronic Systems offers two production engine controllers (one on ceramic and one on thin laminate) for direct mounting on the engine. These controllers are rated for operation over the temperature range of −40℃ to 125℃. The ECU must be mounted on the coolest spot on the engine.
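As a rough illustration of the derating and junction-temperature points made above, the following minimal Python sketch estimates junction temperature from ambient temperature plus dissipation, and applies a simple linear derating that falls to zero at the device's maximum rated temperature. The rated power, reference temperature, and junction-to-ambient thermal resistance used here are assumed, illustrative values.

```python
def junction_temp(t_ambient_c, power_w, theta_ja_c_per_w):
    """Estimate junction temperature from ambient temperature and power dissipation."""
    return t_ambient_c + power_w * theta_ja_c_per_w

def derated_power(t_ambient_c, p_rated_w, t_rated_c, t_ref_c=25.0):
    """Linear derating: full rated power up to t_ref_c, zero at the rated temperature."""
    if t_ambient_c >= t_rated_c:
        return 0.0
    if t_ambient_c <= t_ref_c:
        return p_rated_w
    return p_rated_w * (t_rated_c - t_ambient_c) / (t_rated_c - t_ref_c)

# A 200 C rated device in a 200 C environment has no remaining power capability.
print(derated_power(200.0, p_rated_w=50.0, t_rated_c=200.0))    # -> 0.0
# An IC dissipating 2 W at 125 C ambient with an assumed 7.5 C/W path runs ~15 C hotter.
print(junction_temp(125.0, power_w=2.0, theta_ja_c_per_w=7.5))  # -> 140.0
```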
The packaging technology is consistent with 140℃ operation, but the ECU is limited by semiconductor and capacitor technologies to 125℃. The future projections in the ITRS are not consistent with the desire to place controllers on-engine or in-transmission. It will not always be possible to use the coolest location for mounting control units. Delphi Delco Electronics Systems has developed an in-transmission controller for use in an ambient temperature of 140℃ [10] using ceramic substrate technology. DaimlerChrysler is also designing an in-transmission controller for use with a maximum ambient temperature of 150℃ (Figs. 4 and 5) [11].

II. MECHATRONICS
Mechatronics, or the integration of electrical and mechanical systems, offers a number of advantages in automotive assembly. Integration of the engine controller with the engine allows pretest of the engine as a complete system prior to vehicle assembly. Likewise, with the integration of the transmission controller and the transmission, pretesting and tuning to account for machining variations can be performed at the transmission factory prior to shipment to the automobile assembly site. In addition, most of the wires connecting to a transmission controller run to the solenoid pack inside the transmission. Integration of the controller into the transmission reduces the wiring harness requirements at the automobile assembly level.

[Fig. 4: Prototype DaimlerChrysler ceramic transmission controller [11]]
[Fig. 5: DaimlerChrysler in-transmission module [11]]

The trend in automotive design is to distribute control with network communications. As the industry moves to more X-by-wire systems, this trend will continue. Automotive final assembly plants assemble subsystems and components supplied by numerous vendors to build the vehicle. Complete mechatronic subsystems simplify the design, integration, management, inventory control, and assembly of vehicles. As discussed in the previous section, higher temperature electronics will be required to meet future mechatronic designs.

III. PACKAGING CHALLENGES AT 125℃
Trends in electronics packaging, driven by computer and portable products, are resulting in packages which will not meet underhood automotive requirements at 125℃. Most notable are leadless and area array packages such as small ball grid arrays (BGAs) and quad flat no-lead packages (QFNs). Fig. 6 shows the thermal cycle test (−40℃ to 125℃) results for two sizes of QFN from two suppliers [12]. A typical requirement is for the product to survive 2000–2500 thermal cycles with <1% failure for underhood applications. Smaller I/O QFNs have been found to meet the requirements. Fig. 7 presents the thermal cycle results for BGAs of various body sizes [13]. The die size in the BGA remained constant (8.6 × 8.6 mm). As the body size decreases, so does the reliability. Only the 23-mm BGA meets the requirements. The 15-mm BGA with the 0.56-mm-thick BT substrate nearly meets the minimum requirements. However, the industry trend is to use thinner BT substrates (0.38 mm) for BGA packages. One solution to increasing the thermal cycle performance of smaller BGAs is to use underfill. Capillary underfill was dispensed and cured after reflow assembly of the BGA. Fig. 8 shows a Weibull plot of the thermal cycle data for the 15-mm BGAs with four different underfills. Underfill UF1 had no failures after 5500 cycles and is, therefore, not plotted.
Underfill, therefore, provides a viable approach to meeting underhood automotive requirements with smaller BGAs, but adds process steps, time, and cost to the electronics assembly process. Since portable and computer products dominate the electronics market, the packages developed for these applications are replacing traditional packages such as QFPs for new devices. The automotive electronics industry will have to continue developing assembly approaches such as underfill just to use these new packages in current underhood applications.

IV. TECHNOLOGY CHALLENGES ABOVE 125℃
The technical challenges for high-temperature automotive applications are interrelated, but can be divided into semiconductors, passives, substrates, interconnections, and housings/connectors. Industries such as oil well logging have successfully fielded high-temperature electronics operating at 200℃ and above. However, automotive electronics are further constrained by high-volume production, low cost, and long-term reliability requirements. The typical operating life for oil well logging electronics may only be 1000 h, production volumes are in the range of tens or hundreds and, while cost is a concern, it is not a dominant issue. In the following paragraphs, the technical challenges for high-temperature automotive electronics are discussed.

Semiconductors: The maximum rated ambient temperature for most silicon-based integrated circuits is 85℃, which is sufficient for consumer, portable, and computing product applications. Devices for military and automotive applications are typically rated to 125℃. A few integrated circuits are rated to 150℃, particularly for power supply controllers and a few automotive applications. Finally, many power semiconductor devices are derated to zero power handling capability at 200℃. Nelms et al. and Johnson et al. have shown that power insulated-gate bipolar transistors (IGBTs) and metal–oxide–semiconductor field-effect transistors (MOSFETs) can be used at 200℃ [14], [15]. The primary limitations of these power transistors at the higher temperatures are the packaging (the glass transition temperature of common molding compounds is in the 180℃ to 200℃ range) and the electrical stress on the transistor during hard switching.

A number of factors limit the use of silicon at high temperatures. First, with a bandgap of 1.12 eV, the silicon p-n junction becomes intrinsic at high temperature (225℃ to 400℃, depending on doping levels). The intrinsic carrier concentration is given by

n_i = (N_C N_V)^(1/2) exp(−E_G / 2kT)    (1)

As the temperature increases, the intrinsic carrier concentration increases. When the intrinsic carrier concentration nears the doping concentration level, p-n junctions behave as resistors, not diodes, and transistors lose their switching characteristics. One approach used in high-temperature integrated circuit design is to increase the doping levels, which increases the temperature at which the device becomes intrinsic. However, increasing the doping levels decreases the depletion widths, resulting in higher electric fields within the device that can lead to breakdown.

A second problem is the increase in leakage current through a reverse-biased p-n junction with increasing temperature. Reverse-biased p-n junctions are commonly used in IC design to provide isolation between devices. The saturation current I_s (the ideal reverse-bias current of the junction) is proportional to the square of the intrinsic carrier concentration,

I_s ∝ n_i^2 ∝ exp(−E_go / kT)    (2)

where E_go is the bandgap energy at T = 0 K. The leakage current approximately doubles for each 10℃ rise in junction temperature.
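The two relations above can be exercised numerically. The sketch below uses standard silicon parameters (effective densities of states with their approximate T^1.5 dependence and a 1.12 eV bandgap) to show how quickly n_i and the junction leakage grow with temperature; the exact doubling interval depends on which leakage mechanism dominates, so the numbers are indicative only.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K
E_G = 1.12      # silicon bandgap, eV (its weak temperature dependence is ignored here)

def n_i(temp_k):
    """Intrinsic carrier concentration of silicon (cm^-3), standard textbook form."""
    nc_nv = (2.8e19 * 1.04e19) * (temp_k / 300.0) ** 3  # Nc*Nv, each ~T^1.5
    return math.sqrt(nc_nv) * math.exp(-E_G / (2 * K_B * temp_k))

def leakage_ratio(t1_c, t2_c):
    """Ratio of ideal reverse-bias saturation currents, Is ~ n_i^2."""
    return (n_i(t2_c + 273.15) / n_i(t1_c + 273.15)) ** 2

print(f"n_i at 25 C : {n_i(298.15):.3e} cm^-3")
print(f"n_i at 225 C: {n_i(498.15):.3e} cm^-3")
print(f"Leakage growth from 125 C to 135 C: {leakage_ratio(125, 135):.1f}x")
```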
Increased junction leakage currents increase power dissipation within the device and can lead to latch-up of the parasitic p-n-p-n structure in complementary metal–oxide–semiconductor (CMOS) devices. Epitaxial CMOS (epi-CMOS) has been developed to improve latch-up resistance as the device dimensions are decreased due to scaling, and it provides improved high-temperature performance compared to bulk CMOS. Silicon-on-insulator (SOI) technology replaces reverse-biased p-n junctions with insulators, typically SiO2, reducing the leakage currents and extending the operating range of silicon above 200℃. At present, SOI devices are more expensive than conventional p-n junction isolated devices. This is in part due to the limited use of SOI technology. With the continued scaling of device dimensions, SOI is being used in some high-performance applications, and the increasing volume may help to eventually lower the cost.

Other device performance issues at higher temperatures include gate threshold voltage shifts, decreased noise margin, decreased switching speed, decreased mobility, decreased gain-bandwidth product, and increased amplifier input-offset voltage [16]. Leakage currents also increase for insulators with increasing temperature. This results in increased gate leakage currents and increased leakage of charge stored in memory cells (data loss). For dynamic memory, the increased leakage currents require faster refresh rates. For nonvolatile memory, the leakage limits the life of the stored data, a particular issue for FLASH memory used in microcontrollers and automotive electronics modules.

Beyond the electrical performance of the device, the device reliability must also be considered. Electromigration of the aluminum metallization is a major concern. Electromigration is the movement of the metal atoms due to their bombardment by electrons (current flow). Electromigration results in the formation of hillocks and voids in the conductor traces. The mean time to failure (MTTF) for electromigration is related to the current density J and temperature T as shown in

MTTF = (A / J^n) exp(E_a / kT)    (3)

The exact rate of electromigration and the resulting time to failure is a function of the aluminum microstructure. The addition of copper to the aluminum increases electromigration resistance. The trend in the industry to replace aluminum with copper will improve the electromigration resistance by up to three orders of magnitude [17].

Time-dependent dielectric breakdown (TDDB) is a second reliability concern. Time to failure due to TDDB decreases with increasing temperature. Oxide defects, including pinholes, asperities at the Si–SiO2 interface, and localized changes in chemical structure that reduce the barrier height or increase the charge trapping, are common sources of early failure [18]. Breakdown can also occur due to hole trapping (Fowler–Nordheim tunneling). The holes can collect at weak spots in the Si–SiO2 interface, increasing the electric field locally and leading to breakdown [18]. The temperature dependence of the time to breakdown t_BD can be expressed in Arrhenius form as

t_BD ∝ exp(E_tbd / kT)    (4)

Values reported for E_tbd vary in the literature due to its dependence on the oxide field and the oxide quality. Furthermore, the activation energy increases with breakdown time [18].

With proper high-temperature design, junction-isolated silicon integrated circuits can be used to junction temperatures of 150℃ to 165℃, epi-CMOS can extend the range to 225℃ to 250℃, and SOI can be used to 250℃ to 280℃ [16, p. 224].
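Returning to the Arrhenius terms in (3) and (4), the sketch below computes the temperature acceleration factor they imply between two operating points. The 0.7 eV activation energy is an assumed, illustrative value within the commonly quoted range, not a figure taken from the cited references.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_acceleration(t_use_c, t_stress_c, e_a_ev):
    """Acceleration factor between a use temperature and a higher stress temperature."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((e_a_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

def electromigration_mttf_ratio(j_use, j_stress, n, t_use_c, t_stress_c, e_a_ev):
    """MTTF(use)/MTTF(stress) for scaling of the form MTTF ~ J^-n * exp(Ea/kT), as in (3)."""
    return (j_stress / j_use) ** n * arrhenius_acceleration(t_use_c, t_stress_c, e_a_ev)

# Example: how much longer the same metallization lasts at 125 C than at 150 C,
# assuming Ea = 0.7 eV (illustrative) and equal current density.
print(f"{arrhenius_acceleration(125, 150, 0.7):.1f}x")
```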
High-temperature, nonvolatile memory remains an issue.For temperatures beyond the limits of silicon, silicon carbidebased semiconductors are being developed. The bandgap of SiC ranges from 2.75–3.1 depending on the polytype. SiC has lower leakage currents and higher electric field strength than Si. Due to its wider bandgap, SiC can be used as a semiconductor device at temperatures over 600℃. Theprimary focus of SiC device research is currently for power devices. SiC power devices may eventuallyfind application as power devices in braking systems and direct fuel injection. High-temperature sensors have also been fabricated with SiC. Berget al.have demonstrated a SiCbased sensor for cylinder pressure in combustion engines [19] at up to 350℃ and Casadyet al.[20] have shown a SiC-based temperature sensor for use to 500℃. At present, the wafer size, cost, and device yield have made SiC devices too expensive for general automotive use. Most SiC devices are discrete, as the level of integration achieved in SiC to date is low.Passives: Thick and thin-film chip resistors are typically rated to 125 ℃. Naefeet al.[21] and Salmonet al.[22] have shown that thick-film resistors can be used at temperatures above 200℃ if the allowable absolute tolerance is 5% or greater. The resistors studied were specifically formulated with a higher softening point glass. The minimum resistance as afunction of temperature was shifted from 25℃to 150℃to minimize the temperature coefficient of resistance (TCR) over the temperature range to 300℃. TaN and NiCr thin-film resistors have been shown to have less than 1% drift after 1000 h at 200℃ [23]. Thus, for tighter tolerance applications, thin-film chip resistors are preferred. Wire wound resistors provide a high-temperature option for higher power dissipation levels [21].High-temperature capacitors present more of a challenge. For low-value capacitors, negative-positive-zero (NPO) ceramic and MOS capacitors provide low-temperature coefficient of capacitance (TCC) to 200℃. NPO ceramic capacitorshave been demonstrated to 500℃ [24]. Higher dielectric constant ceramics (X7R, X8R, X9U), used to achieve the high volumetric efficiency necessary for larger capacitor values, exhibit a significant capacitance decrease above the Curie temperature, which is typically between 125℃ to 150℃. As the temperature increases, the leakage current increases, the dissipation factor increases, and the breakdown strength decreases. Increasing the dielectric tape thickness to increase breakdown strength reduces the capacitance and is a tradeoff. X7R ceramic capacitors have been shown to be stable when stored at 200℃ [23]. X9U chip capacitors are commercially available for use to 200 C, but there is a significant decrease in capacitance above 150℃.Consideration must also be given to the capacitor electrodes and terminations. Ni is now being substituted for Ag and PdAg to lower capacitor cost. The impact of this change on hightemperature reliability must be evaluated. The surface finish for ceramic capacitor terminations is typically Sn. The melting point of the Sn (232℃) and its interaction with potential solders/brazes must also be considered. Alternate surfacefinishes may be required.For higher value, low-voltage requirements, wet tantalum capacitors show reasonable behavior at 200℃ if the hermetic seal does not lose integrity [23]. Aluminum electrolytics are also available for use to 150℃. 
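For the passive-component drift discussed above, a first-order temperature-coefficient calculation already shows why tolerance budgets widen at 200℃. The sketch below applies R(T) = R0·(1 + TCR·ΔT) with assumed, illustrative TCR values; it models only the reversible temperature coefficient, not the long-term aging drift quoted from [23].

```python
def resistance_at_temp(r_nominal_ohm, tcr_ppm_per_c, temp_c, ref_temp_c=25.0):
    """First-order resistance change with temperature: R(T) = R0 * (1 + TCR * dT)."""
    return r_nominal_ohm * (1.0 + tcr_ppm_per_c * 1e-6 * (temp_c - ref_temp_c))

# Illustrative TCR values (ppm/C); actual values depend on the resistor formulation.
for name, tcr in [("thick film", 100.0), ("TaN thin film", 25.0)]:
    r_200 = resistance_at_temp(10_000.0, tcr, 200.0)
    drift_pct = (r_200 - 10_000.0) / 10_000.0 * 100.0
    print(f"{name}: 10 kohm at 25 C -> {r_200:.0f} ohm at 200 C ({drift_pct:+.2f}%)")
```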
Mica paper (260℃) and Teflon film (200℃) capacitors can provide higher voltage capability, but are large and bulky [25]. High-temperature capacitors are relatively expensive. Volumetrically efficient, high-voltage, high-capacitance, high-temperature, and low-cost capacitors are still needed.

Standard transformer and inductor cores with copper wire and Teflon insulation are suitable for operation to 200℃. For higher-temperature operation, the magnetic core, the conductor metal (Ni instead of Cu), and the insulator must be selected to be compatible with the higher temperatures [16, pp. 651–652]. Specially designed transformers can be used to 450℃ to 500℃; however, they are limited in operating frequency.

Crystals are required for clock frequency generation for microcontrollers. Crystals with acceptable frequency shift over the temperature range from 55℃ to 200℃ have been demonstrated [22]. However, the selection of packaging materials and the assembly process for the crystal are key to high-temperature performance and reliability. For example, epoxies used in assembly must be compatible with 200℃ operation.

Substrates: Thick-film substrates with gold metallization have been used in circuits to 500℃ [21], [23]. Palladium silver, platinum silver, and silver conductors are more commonly used in automotive hybrids for reduced cost. Silver migration has been observed with an unpassivated PdAg thick-film conductor under bias at 300℃ [21]. The time-to-failure needs to be examined as a function of temperature and bias voltage, with and without passivation. Low-temperature cofired ceramic (LTCC) and high-temperature cofired ceramic (HTCC) are also suitable for high-temperature automotive applications. Embedded resistors are standard in thick-film hybrids, LTCC, and some HTCC technologies. As previously mentioned, thick-film resistors have been demonstrated at temperatures above 200℃. Dielectric tapes for embedded capacitors have also been developed for LTCC and HTCC. However, these embedded capacitors have not been characterized for high-temperature use.

High-Tg laminates are also available for fabrication of high-temperature printed wiring boards. Cyanate esters [Tg = 250℃ by differential scanning calorimetry (DSC)], polyimide (260℃ by DSC), and liquid crystal polymers (Tm > 280℃) provide options for use to 200℃. Cyanate ester boards have been used successfully in test vehicles at 175℃, but failed when exposed to 250℃ [26]. The higher coefficient of thermal expansion (CTE) of the laminate substrates compared to the ceramics must be considered in the selection of component attachment materials. The temperature limits of the laminates with respect to assembly temperatures must also be carefully considered. Work is ongoing to develop and implement embedded resistor and capacitor technology for laminate substrates for conventional temperature ranges. This technology has not been extended to high-temperature applications.

One method many manufacturers are using to address the higher temperatures while maintaining lower cost is the use of laminate substrates attached to metal. The typical design involves the use of higher-Tg (+140℃ and above) laminate substrates attached to an aluminum plate (approximately 2.54-mm thick) using a sheet or liquid adhesive. To assist in thermal performance, the laminate substrate is often thinner (0.76 mm) than traditional automotive substrates for under-the-hood applications.
While this design provides improved thermal performance, the attachment of the laminate to aluminum increases the CTE of the overall substrate. The resultant CTE is very dependent on the ability of the attachment material to decouple the CTE mismatch between the laminate substrate and the metal backing. However, regardless of the attachment material used, the combination of the laminate and metal will increase the CTE of the overall substrate above that of a stand-alone laminate substrate. This impact can be quite significant for the reliability of components with low CTE values (such as ceramic chip resistors). Fig. 9 illustrates the impact of two laminate-to-metal attachment options compared to standard laminate substrates [27], [28]. The reliability data presented are for 2512 ceramic chip resistors attached to a 0.79-mm-thick laminate substrate attached to aluminum using two attachment materials. Notice that while one material significantly outperforms the other, both are less reliable than the same chip resistor attached to laminate without metal backing.

This decrease in reliability is also exhibited by small ball grid array (BGA) packages. Fig. 10 shows the reliability of a 15-mm BGA package attached to laminate compared to the same package attached to a laminate substrate with metal backing [27], [28]. The attachment material used for the metal-backed substrate was the best material selected from previous testing. Notice again that the metal-backed substrate degrades the reliability. This reliability deterioration is of particular concern since many IC packages used for automotive applications are ball grid array packages and the packaging trend is toward reduced package size. These packaging trends make the use of metal-backed substrates difficult for next-generation products.

One potential solution to the above reliability concern is the use of encapsulants and underfills. Fig. 11 illustrates how conformal coating can improve component reliability for surface-mount chip resistors [27], [28]. Notice that the reliability varies greatly depending on material composition. However, for components which meet a marginal level of reliability, conformal coatings may assist the design in meeting the target reliability requirements. The same scenario can be found for BGA underfills. Typical underfill materials may extend the component life by a factor of two or more. For marginal IC packages, this enhancement may provide enough reliability improvement to allow the designs to meet under-the-hood requirements. Unfortunately, the improvements provided by encapsulants and underfills increase the material cost and add one or more manufacturing processes for material dispense and cure.

Interconnections: Methods of mechanical and electrical interconnection of the active and passive components to the board include chip and wire, flip-chip, and soldering of packaged parts. In chip-and-wire assembly, epoxy die-attach materials can be used to 165℃ [29]. Polyimide and silicone die-attach materials can be used to 200℃. For higher temperatures, SnPb (>90Pb), AuGe, AuSi, AuSn, and AuIn have been used. However, with the exception of SnPb, these are hard brazes, and with increasing die size, CTE mismatches between the die and the substrate will lead to cracking with thermal cycling.
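As a rough illustration of why the metal backing degrades chip-resistor reliability, the sketch below estimates the first-order shear strain imposed on the attachment joint by the CTE mismatch over one thermal cycle. The material properties, geometry, and temperature swing are illustrative assumptions, not values taken from the studies cited above.

# First-order estimate of the cyclic shear strain in the attachment joint of a
# ceramic chip resistor caused by CTE mismatch with the substrate.
# All numbers are illustrative assumptions, not data from the cited studies.

def joint_shear_strain(cte_component_ppm, cte_substrate_ppm, delta_t_kelvin,
                       distance_to_neutral_point_mm, joint_height_mm):
    """Approximate shear strain: gamma = dCTE * dT * L / h."""
    d_cte = abs(cte_substrate_ppm - cte_component_ppm) * 1e-6  # 1/K
    return d_cte * delta_t_kelvin * distance_to_neutral_point_mm / joint_height_mm

L_DNP = 3.15        # mm, roughly half the body length of a 2512 chip resistor
H_JOINT = 0.08      # mm, assumed solder joint stand-off height
DELTA_T = 165.0     # K, assumed -40 degC to +125 degC thermal cycle
CTE_RESISTOR = 6.5  # ppm/K, assumed alumina resistor body

for name, cte_substrate in [("stand-alone laminate", 16.0),
                            ("laminate bonded to aluminum backing", 22.0)]:
    gamma = joint_shear_strain(CTE_RESISTOR, cte_substrate, DELTA_T, L_DNP, H_JOINT)
    print(f"{name}: approx. shear strain per cycle = {gamma:.3f}")

Under a Coffin–Manson-type fatigue law, the larger strain per cycle for the metal-backed case translates into a substantially shorter joint fatigue life, which is qualitatively consistent with the trend reported for Fig. 9.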
Chinese-English parallel translation of foreign literature (document contains the English original and the Chinese translation). Original text: The Choice of Payment Method in European M&A

Global M&A activity has grown dramatically over the last ten years, bringing with it major changes in the organization and control of economic activity around the world. Yet there is much about the M&A process that we do not fully understand, including the choice of payment method. Given the large size of many M&A transactions, the financing decision can have a significant impact on an acquirer's ownership structure, financial leverage, and subsequent financing decisions. The financing decision can also have serious corporate control, risk bearing, tax, and cash flow implications for the buying and selling firms and their shareholders.

In making an M&A currency decision, a bidder faces a choice between using cash and stock as deal consideration. Given that most bidders have limited cash and liquid assets, cash offers generally require debt financing. As a consequence, a bidder implicitly faces the choice of debt or equity financing, which can involve a tradeoff between the corporate control concerns of issuing equity and the rising financial distress costs of issuing debt. Thus, a bidder's M&A currency decision can be strongly influenced by its debt capacity and existing leverage. It can also be strongly influenced by management's desire to maintain the existing corporate governance structure. In contrast, a seller can face a tradeoff between the tax benefits of stock and the liquidity and risk-minimizing benefits of cash consideration. For example, sellers may be willing to accept stock if they have a low tax basis in the target stock and can defer their tax liabilities by accepting bidder stock as payment. On the other hand, sellers can prefer cash consideration to sidestep the risk of becoming a minority shareholder in a bidder with concentrated ownership, thereby avoiding the associated moral hazard problems. Unfortunately, due to data limitations, this seller tradeoff cannot be easily measured.

Under existing theories of capital structure, debt capacity is a positive function of tangible assets, earnings growth, and asset diversification, and a negative function of asset volatility. Firms with greater tangible assets can borrow more, privately from banks and publicly in the bond market. Since larger firms are generally more diversified, we expect them to have a lower probability of bankruptcy at a given leverage ratio and thus greater debt capacity. These financing constraint and bankruptcy risk considerations can also reduce a lender's willingness to finance a bidder's cash bid, especially in relatively large deals.

In assessing potential determinants of an M&A payment method, our focus is on a bidder's M&A financing choices, recognizing that targets can also influence the final terms of an M&A deal. However, if a target's financing choice is unacceptable to the bidder, then the proposed M&A transaction is likely to be aborted, or else the bidder can make a hostile offer on its own terms. For a deal to succeed, the bidder must be satisfied with the financial structure of the deal.

Bidder and target considerations:

* Corporate Control

Bidders controlled by a major shareholder should be reluctant to use stock financing when this causes the controlling shareholder to risk losing control. Assuming control is valuable, the presence of dominant shareholder positions should be associated with more frequent use of cash, especially when the controlling shareholder's position is threatened.
To capture this effect, we use the ultimate voting stake held by the largest controlling shareholder. A bidder with diffuse or highly concentrated ownership is less likely to be concerned with corporate control issues. In line with this argument, Martin (1996) documents a significantly negative relationship between the likelihood of stock financing and managerial ownership only over the intermediate ownership range. Therefore, we incorporate the possibility of a non-linear relationship between the method of payment and the voting rights of a bidder's controlling shareholder by estimating both a linear and a cubic specification for the ultimate voting control percentage of the bidder's largest shareholder. In our robustness analysis, we also estimate a spline function for this variable.

Corporate control concerns in M&A activity can manifest themselves in more subtle ways. Concentrated ownership of a target means that a stock-financed acquisition can create a large blockholder, threatening the corporate governance of the acquirer. If the seller is closely held or is a corporation disposing of a division, then ownership of the target tends to be very concentrated. This implies that financing the M&A deal with stock can create a new blockholder in the bidder. While the risk of creating a new bidder blockholder with stock financing is higher when a target has a concentrated ownership structure, this is especially true when the relative size of the deal is large. To capture the risk of creating a large blockholder when buying a target with stock financing, we employ CONTROL LOSS, the product of the target's control block and the deal's relative size. The relative deal size is computed as the ratio of offer size (excluding assumed liabilities) to the sum of the bidder's pre-offer equity capitalization plus the offer size. The target's controlling blockholder is assumed to have 100% ownership for unlisted targets and subsidiary targets.

* Collateral, Financial Leverage and Debt Capacity

We use the fraction of tangible assets as our primary measure of a bidder's ability to pay cash, financed from additional borrowing. COLLATERAL is measured by the ratio of property, plant and equipment to the book value of total assets. Myers (1977) argues that debtholders in firms with fewer tangible assets and more growth opportunities are subject to greater moral hazard risk, which increases the cost of debt, often making stock more attractive. Hovakimian, Opler and Titman (2001) find that a firm's percentage of tangible assets has a strong positive influence on its debt level.

We also control for a bidder's financial condition with its leverage ratio, FIN'L LEVERAGE. Since cash is primarily obtained by issuing new debt, highly levered bidders are constrained in their ability to issue debt and as a consequence use stock financing more frequently. A bidder's financial leverage is measured by the sum of the bidder's face value of debt prior to the M&A announcement plus the deal value (including assumed liabilities), divided by the sum of the book value of total assets prior to the announcement plus the deal value (including assumed liabilities). This captures the bidder's post-deal leverage if the transaction is debt financed. This measure differs from Martin (1996), who uses a pre-deal bidder leverage measure adjusted for the industry mean and reports an insignificant effect.

Bidder size is likely to influence its financing choices. Larger firms are more diversified and thus have proportionally lower expected bankruptcy costs.
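To make the variable definitions above concrete, the sketch below computes the relative deal size, CONTROL LOSS, COLLATERAL, and FIN'L LEVERAGE exactly as they are described in the text. The deal figures are hypothetical and serve only to illustrate the arithmetic.

# Illustrative computation of the bidder-side variables defined above.
# All deal figures are hypothetical; only the formulas follow the text.

def relative_deal_size(offer_size, bidder_preoffer_equity_cap):
    """Offer size (excl. assumed liabilities) over pre-offer equity cap plus offer size."""
    return offer_size / (bidder_preoffer_equity_cap + offer_size)

def control_loss(target_control_block, offer_size, bidder_preoffer_equity_cap):
    """Product of the target's control block and the deal's relative size."""
    return target_control_block * relative_deal_size(offer_size, bidder_preoffer_equity_cap)

def collateral(ppe, total_assets_book):
    """Property, plant and equipment over the book value of total assets."""
    return ppe / total_assets_book

def financial_leverage(bidder_debt_face_value, deal_value_incl_liabilities, total_assets_book):
    """Bidder's post-deal leverage if the transaction is debt financed."""
    return ((bidder_debt_face_value + deal_value_incl_liabilities) /
            (total_assets_book + deal_value_incl_liabilities))

# Hypothetical deal: a 300m offer for an unlisted target (control block assumed 100%).
print(control_loss(target_control_block=1.0, offer_size=300.0,
                   bidder_preoffer_equity_cap=1200.0))          # 0.20
print(collateral(ppe=450.0, total_assets_book=1500.0))          # 0.30
print(financial_leverage(bidder_debt_face_value=600.0,
                         deal_value_incl_liabilities=350.0,
                         total_assets_book=1500.0))             # about 0.51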
They also have lower flotation costs and are likely to have better access to debt markets, making debt financing more readily available. Thus, cash financing should be more feasible in the case of larger firms. Larger firms are also more apt to choose cash financing in smaller deals due to its ease of use, provided they have sufficient unused debt capacity or liquid assets. Further, the use of cash allows the bidder to avoid the significant costs of obtaining shareholder approval of pre-emptive rights exemptions and authorizations and the higher regulatory costs of stock offers. We measure bidder asset size by the log of the pre-merger book value of assets in dollars (total assets). In addition to bidder control and financing considerations, we need to take into account several other bidder characteristics.

* Relative Deal Size, Bidder Stock Price Runup and Asymmetric Information

Hansen (1987) predicts that bidders have greater incentives to finance with stock when the asymmetric information about target assets is high. This information asymmetry is likely to rise as target assets rise in value relative to those of the bidder. Yet, when stock is used in relatively larger deals, it produces more serious dilution of a dominant shareholder's control position. Finally, as bidder equity capitalization rises, concern about its financing constraint falls, since there is a relatively smaller impact on its overall financial condition. We proxy for these effects with REL SIZE, which is computed as the ratio of the deal's offer size (excluding assumed liabilities) to the sum of the deal's offer size plus the bidder's pre-offer market capitalization at the year-end prior to the bid.

Both Myers and Majluf (1984) and Hansen (1987) predict that bidders will prefer to finance with stock when they consider their stock overvalued by the market and prefer to finance with cash when they consider their stock undervalued. As uncertainty about bidder asset value rises, this adverse selection effect is exacerbated. Martin (1996) finds evidence consistent with this adverse selection prediction. For a sample of publicly traded targets, Travlos (1987) finds that stock-financed M&A deals exhibit much larger negative announcement effects than cash-financed deals. He concludes that this is consistent with the empirical validity of an adverse selection effect. As a proxy for bidder overvaluation (or undervaluation), we use a bidder's buy-and-hold cumulative stock return over the year preceding the M&A announcement month.

In addition to bidder considerations, we need to take into account typical target considerations. These preferences are related to risk, liquidity, asymmetric information, and home bias.

T1. Unlisted Targets and Subsidiary Targets

We use an indicator variable, UNLISTED TARGET, to control for listing status, where the variable takes a value of one if the target is a stand-alone company not listed on any stock exchange, and zero for listed targets and unlisted subsidiaries. When an M&A deal involves an unlisted target, a seller's consumption/liquidity needs are also likely to be important considerations. These sellers are likely to prefer cash given the illiquid and concentrated nature of their portfolio holdings and the often impending retirement of a controlling shareholder-manager. Likewise, corporations selling subsidiaries are often motivated by financial distress concerns or a desire to restructure toward their core competency.
In either case, there is a strong preference for cash consideration to realize these financial or asset restructuring goals. A likely consequence is a greater use of cash in such deals, since sellers are frequently motivated to divest subsidiaries to finance new acquisitions or reduce their debt burden. As noted earlier, these two target ownership structures are also likely to elicit bidder corporate control concerns, given their concentrated ownership. Thus, bidders are likely to prefer cash financing of such deals, especially as they become relatively large.

T2. Cross-Industry Deals and Asymmetric Information

Seller reluctance to accept bidder stock as payment should rise as the asymmetric information problem worsens with greater uncertainty about bidder equity value and future earnings. This problem is also likely to be more serious for conglomerate mergers. In contrast, sellers are more apt to accept a continuing equity position in an intra-industry merger, where they are well acquainted with industry risks and prospects.

T3. Cross-Border Deals, Local Exchange Listing and Home Bias

In cross-border deals, selling stock to foreign investors can entail several problems. We are concerned with the possibility that investors have a home-country bias in their portfolio decisions, as documented in Coval and Moskowitz (1999), French and Poterba (1991) and Grinblatt and Keloharju (2001), among others. This can reflect a foreign stock's greater trading costs, lower liquidity, exposure to exchange risk, and less timely, more limited access to firm information.

T4. Bidder Investment Opportunities

High-growth bidders can make an attractive equity investment for selling shareholders. MKT-TO-BOOK, defined as the market value of equity plus the book value of debt over the sum of the book value of equity plus the book value of debt prior to the bid, measures a bidder's investment in growth opportunities. We expect a higher market-to-book ratio to increase a bidder stock's attractiveness as M&A consideration. High market-to-book is also correlated with high levels of tax-deductible R&D expenditures, along with low current earnings and cash dividends. These firm attributes lower a bidder's need for additional debt tax shields, making cash financing less attractive. These attributes are also attractive to high-income-bracket sellers due to their tax benefits. Jung, Kim and Stulz (1996) document a higher incidence of stock financing for higher market-to-book buyers.

Translation: The Choice of Payment Method in European M&A — Over the last ten years, global M&A activity has grown dramatically, bringing with it major changes in the organization and control of economic activity around the world.
Chinese-English parallel translation literature for literary works
This document compiles Chinese-English parallel translation references for literary works, covering the following titles:
1. Pride and Prejudice
Translation: the English original is titled "Pride and Prejudice"; the Chinese edition was translated by Qian Zhongshu.
One of the representative works of the English author Jane Austen, the novel depicts the life and love stories of the English upper-middle class in the 19th century.
2. The Great Gatsby
Translation: the English original is titled "The Great Gatsby"; the Chinese edition was translated by Yang Jiang.
The novel centers on Gatsby, a young man living on Long Island, New York, and the efforts he makes to pursue his old love Daisy; it is a classic of 20th-century American literature.
3. The Catcher in the Rye
Translation: the English original is titled "The Catcher in the Rye"; the Chinese edition was translated by Shi Zhecun.
The novel's protagonist, Holden, is one of the best-known anti-hero figures in modern American literature, and the work is a penetrating portrayal of the loneliness and inner conflict of adolescence.
4. 1984
Translation: the English original is titled "1984"; the Chinese edition was translated by Li Jingrui.
One of the representative works of the English author George Orwell, the novel depicts a fictional totalitarian society.
The above is a partial list of Chinese-English parallel translation references for literary works, intended to help readers better understand and study the works concerned.
Road and bridge engineering: Chinese-English parallel translation of foreign literature (document contains the English original and the Chinese translation). Original text: Asphalt Mixtures — Applications, Theory and Principles
1. Applications
[...] is the most common of its applications, however, and the one that will be [...]. The term "flexible" is used to distinguish these pavements from those made with Portland cement, which are classified as rigid pavements; that is, [...] it provides the key to the design approach which must be used [...]. [...] down into high and low types; the type usually [...] product is used. The low types of pavement are made with the cutback, or emulsion, [...]. [...] type may have several names. However, [...] is similar for most low-type pavements and [...] mix, forming the pavement. The high type of asphalt [...] grade.
Fig. 1: A modern [...]. Fig. 2: Asphalt concrete at the San Francisco [...]. They are used when high wheel loads and high volumes of traffic occur and are [...]
(Document contains the English original and the Chinese translation.) Chinese-English translation. 1. Translation of the foreign-language reference: On the Dilution and Anti-Dilution of Well-Known Trademarks

First, an overview of well-known trademarks

A well-known trademark is a trademark that, through long-term use, enjoys a high reputation in the market, is familiar to the relevant public, and has been recognized as such through certain procedures. Since the Paris Convention first introduced the concept of the well-known trademark, special legislative protection for well-known trademarks has become a worldwide trend.

The Paris Convention provides that, for marks recognized as well-known in a member state, protection takes two forms: first, others are barred from registering the mark; second, others are prohibited from using an identical or similar mark. TRIPS goes further: (1) it extends the Paris Convention's special protection of well-known trademarks to service marks; (2) it extends the scope of protection so that use of an identical or similar sign may be prohibited even on goods or services that are not similar to those of the well-known mark; and (3) it sets out, in principle, only simple requirements on how a well-known trademark is to be determined.

In national legislative practice, the standards for recognizing well-known trademarks vary; recognition is often based on factors such as public awareness of the mark in the relevant sector, the sales and geographic scope of the goods bearing the mark, national interests, and other considerations. From the perspective of the international treaties protecting well-known trademarks, the recognition of a well-known trademark and its protection are closely linked.

Second, modes of protection for well-known trademarks

Two main models are used to protect well-known trademarks: relative protection and absolute protection. Under relative protection, others are barred from registering or using a mark identical or similar to the well-known trademark on goods in the same or similar lines of business as the trademark owner, while use of the same or a similar mark on dissimilar goods remains permitted; this is the approach of the Paris Convention. Under absolute protection, others are barred from registering or using a mark identical or similar to the well-known trademark in any line of business, including those whose goods differ from the goods of the well-known mark; this is the expanded, absolute approach adopted by the TRIPS Agreement.

In a simple economy, the goods designated by a trademark were of a single kind, and the link between a specific mark and its specific goods was close. As valuable well-known trademarks have come to be used on more and more different types of goods, whose properties may be entirely different, the association between a mark and a particular group of goods, and between the mark and a particular producer, has weakened. If well-known trademarks are not given cross-category protection and others are allowed to register them, then even where the goods differ markedly, the public will still form a link between the new goods and the reputable well-known mark, believing that the new goods may come from the owner of the well-known mark, or that some legal, organizational, or business association exists between them, and consumers may thus be misled in their purchases.
The rapid development of the commodity today, the relative protectionism has not improved the protection of the public and well-known trademark owner's interests.In view of this, in order to effectively prevent the reputation of well-known trademarks, and the identification of significant features and advertising value by the improper use of the damage, many countries on the implementation of a well-known trademarks is protectionism, which prohibits the use of any products on the same or with the well-known trademarks Similar to the trademark.TRIPS Agreement Article 16, paragraph 3 states: Paris Convention 1967 text, in principle, applicable to the well-known trademarks and logos of the commodities or services are not similar goods or services, if not similar goods or services on the use of the trademark will be Suggest that the goods or services with the well-known trademarks on a link exists, so that the interests of all well-known trademarks may be impaired.Third, the well-known trademarks dilutedThe protection of trademark rights, there are mainly two: one for the confusion theory, a theory for desalination.The main traditional trademark protection for trade marks the difference between functional design, and its theoretical basis for the theory of confusion. In summary, which is to ensure that the trademark can be identification, confirmation and different goods or services different from the significant features, to avoid confusion, deception and E Wu, the law gives first use of a person or persons registered with exclusive rights, which prohibits any Without the permission of the rights to use may cause confusion among consumers in the same or similar trademarks. Clearly, the traditional concept of trademark protection, to stop "the possibility of confusion" is the core of trademark protection.With the socio-economic development and commercialization of the continuousimprovement of the degree, well-known trademarks by the enormous implication for the growing commercial value have attracted the attention of people. Compared with ordinary marks, bearing well-known trademarks by the significance and meaning beyond the trademark rights to the general, and further symbol of product quality and credit, contains a more valuable business assets - goodwill. Well-known trade mark rights of people to use its excellent reputation of leading the way in the purchasing power, instead of the use of trademarks to distinguish between different products and producers.When the mark beyond the role of this feature to avoid confusion, then, this factor is obviously confused and can not cover everything, and other factors become as important as or more important. Thus, in theory confusion on the basis of further development of desalination theory.Trademark Dilution (dilution), also known as trademark dilution, is one of trademark infringement theory. "Watered down", according to the U.S. "anti-federal trademark law dilute" means "regardless of well-known trade mark rights and the others between the existence of competition, or existence of confusion, misunderstanding or the possibility of deception, reduce and weaken the well-known trademarks Its goods or services and the identification of significant capacity of the act. " In China, some scholars believe that "refers to dilute or weaken gradually weakened consumer or the public will be trademarks of the commercial sources with a specific link between the ability." 
Trademark faded and that the main theory is that many market operators have Using well-known trademarks of the desire of others, engage in well-known trademarks should be to prevent others from using its own unique identification of special protection.1927, Frank • Si Kaite in the "Harvard Law reviews" wrote the first trademark dilute theory. He believes that people should not only be trademarks of others prohibit the use of the mark, he will compete in the commodity, and should prohibit the use of non-competitive goods on. He pointed out: the real role of trade marks, not distinguish between goods operators, but satisfied with the degree of differencebetween different commodities, so as to promote the continuous consumer purchase. From the basic function of trademarks, trade mark used in non-competitive goods, their satisfaction with regard to the distinction between the role of different commodities will be weakened and watered down. Trademarks of the more significant or unique, to the public the impression that the more deeply, that is, should be restricted to non-compete others in the use of goods or services.Since then, the Intellectual Property Rights Branch of the American Bar Association Chairman Thomas • E • Si Kaite Smith on the theory made a further elaboration and development. He said: "If the courts allow or laissez-faire 'Rolls Royce' restaurants, 'Rolls-Royce' cafeteria, 'Rolls-Royce' pants, 'Rolls-Royce' the candy, then not 10 years, ' Rolls-Royce 'trademark owners will no longer have the world well-known trademarks. "Si Kaite in accordance with the theory of well-known trade marks have faded because of the effect of non-rights holders with well-known trademarks in the public mind the good image of well-known trademarks will be used in non-competitive goods, so as to gradually weaken or reduce the value of well-known trademarks, That is, by the well-known trademarks have credibility. Trademark tag is more significant or unique characteristics, which in the public mind the impression that the more deep, more is the need for increased protection, to prevent the well-known trade marks and their specific goods was the link between the weakening or disappearance.In practice, trademarks diluted share a wide range of operating methods, such as:A well-known trademarks of others will still use as a trademark, not only in the use of the same, similar to the goods or services. For example, household appliances, "Siemens" trademark as its own production of the furniture's trademark.2. To other people's well-known trademarks as their corporate name of the component. Such as "Haier" trademark for the name of his restaurant.3. To the well-known trademarks of others as the use of domain names. For example, watches trademark "OMEGA" registered the domain name for themselves().4. To the well-known trademarks of others as a commodity and decorating use.5. Will be others as well-known trade marks of goods or services using the common name. For example, "Kodak" interpreted as "film, is a camera with photographic material", or "film, also known as Kodak,……" This interpretation is also the mark of the water down. If the "Kodak" ignored the trademark owner, after a period of time, people will Kodak film is, the film is Kodak. In this way, the Kodak film-related goods has become the common name, it as a trademark by a significant, identifiable on limbo. 
The public well-known Jeep (Jeep), aspirin (Aspirin), freon (Freon), and so was the registration of foreign goods are due to improper use and management and the protection of poor, evolved into similar products common name, Thus lost its trademark logo features.U.S. "anti-diluted Federal trademark law" before the implementation of the Federal Court of Appeal through the second from 1994 to 1996 case, identified the following violations including the Trademark Dilution: (1) vague, non-means as others in similar goods not on Authorized the use of a trademark so that the sales of goods and reduce the value of trademarks or weakened (2) pale, that is because of violations related to the quality, or negative, to demonize the acts described a trademark goods may be caused to others The negative effects of the situation, (3) to belittle, or improperly changed, or derogatory way to describe a trade mark case.The majority of our scholars believe that the well-known trademarks diluted There are two main forms: watered down and defaced. The so-called dilute the people will have no right to use the same or similar trademark with the well-known trademarks used in different types of commodities, thus making the mark with the goods weakened ties between the specific acts the so-called defaced is that people will have no right to use the same Or similar marks for the well-known trade marks will have to belittle good reputation, tarnished the role of different types of goods on the act.Some scholars believe that the desalination also refers to the three aspects of well-known trademarks damage. First, in a certain way to demonize the relevant well-known trademarks; Second, some way related to well-known trademark dark; Third is the indirect way so that consumers will distort trade mark goods for the general misunderstanding of the name.In general, can be diluted in the form summarized as follows:1, weakeningWeakening is a typical diluted form, also known as dark, is that others will have some visibility in the use of a trademark is not the same, similar to the goods or services, thereby weakening the mark with its original logo of goods or services The link between, weakening the mark was a significant and identifiable, thus bearing the trade mark by the damage caused by acts of goodwill. Weakening the mark of recognition of the significant damage is serious, it can be the recognition of trademark dilution, was significant, or even make it completely disappeared, then to the mark by carrying the reputation of devastating combat.First, the weakening of the identification is the weakening and lower. Any unauthorized person, others will have some visibility in the use of a trademark is not the same, similar to the goods or services, will reduce its recognition of. But consumers were referred to the mark, it may no longer think of first is the original goods or services, not only is the original or goods or services, consumers simply will not even think of goods or services, but the Trademark Dilution of goods Or services. There is no doubt that this marks the recognition of, is a heavy blow.Weakening of the mark is significantly weakened and the lower. Mark is significantly different from other commercial trademark marked characteristics. A certain well-known trademarks, which in itself should be a very significant, very significant and can be quickly and other signs of its own separate. 
However, the Trademark Dilution of the same or similar trademarks used in different goods or services, so that was the trademark and other commercial marked difference in greatlyreduced, to the detriment of its significant.Of course, regardless of the weakening of the mark was a significant or identifiable, are the ultimate impact of the mark by the bearer of goodwill. Because the trade mark is the carrier of goodwill, the mark of any major damage, the final performance for all bearing the trade mark by the goodwill of the damage.2, tarnishedMeans others will have some well-known trademarks in the use of the good reputation of the trademark will have to belittle, defaced role of the goods or services on the act. Contaminate the trademarks of others, is a distortion of trade marks to others, the use of the damage, not only reduced the value of the mark, even on such values were defaced. As tarnished reputation is a trademark of damage, so tarnished included in the diluted acts, is also relatively accepted view. Moreover, in the field of trademark faded, tarnished than the weakening of the danger of even greater acts, the consequences are more serious.3, degradationDegradation is due to improper use of trademarks, trade mark goods for the evolution of the common name recognition and loss of function. Trademark Dilution degradation is the most serious kind. Degradation of the event, will completely lose their identification marks, no longer has the distinction function as the common name of the commodity.Fourth, protection against diluteBased on the well-known trademarks dilute the understanding, and accompanied by a serious weakening of well-known trademarks, all countries are gradually legislation to provide for the well-known trademarks to protect anti-diluted. There are specific models:1, the development of special anti-dilute the protection of well-known trademarksThe United States is taking this protection on behalf of the typical pattern.1995, in order to prevent lower dilute "the only representative of the public eye, the unique image of the trademark" to protect "the trademark value of advertising," the U.S. Congress passed the National reunification of the "anti-federal trademark law watered down", so as to the well-known trademarks All provide the unified and effective national anti-dilute the protection.U.S. anti-diluted in trademark protection has been added a new basis for litigation, which is different from the traditional basis of trademark infringement litigation. Trademark infringement of the criteria is confusing, the possibility of deception and misleading, and the Trademark Dilution criteria is unauthorized to others well-known trademarks of the public to reduce the use of the trademark instructions for goods and services only and in particular of Feelings. It is clear that the U.S. law is anti-diluted basis, "business reputation damage" and the possibility of well-known trade mark was a significant weakening of the possibility of providing relief. Moreover, anti-faded law does not require the application of competitive relations or the existence of possible confusion, which is more conducive to the exercise of trademark right to appeal.2, through the Anti-Unfair Competition Law ProtectionSome countries apply anti-unfair competition law to protect famous trademarks from being watered down. 
Such as Greece, "Anti-Unfair Competition Law," the first one: "Prohibition of the Use of well-known trademarks in order to take advantage of different commodities on the well-known trademarks dilute its credibility was significant." Although some countries in the Anti-Unfair Competition Law does not explicitly prohibits trademark faded, but the Trademark Dilution proceedings, the application of unfair competition litigation.3, through or under well-known trademark protection within the scope of trademark protectionMost civil law countries is this way. 1991, "the French Intellectual PropertyCode," Di Qijuan trademark law section L.713-5 of the provisions that: not in similar goods or services on the use of well-known trade marks to the trademark owner or a loss caused by the improper use of trademarks , Against people should bear civil liability.Germany in 1995, "the protection of trademarks and other signs of" Article 14 also stipulates that: without the consent of the trademark rights of third parties should be banned in commercial activities, in and protected by the use of the trademark does not like similar goods or services , And the use of the trademark identical or similar to any signs.4, in the judicial precedents in the application of anti-dilute the protection ofIn some countries there are no clear legislative provisions of the anti-dilute well-known trademarks, but in judicial practice, they are generally applicable civil law on compensation for the infringement of the debt to protect the interests of all well-known trademarks, through judicial precedents to dilute the protection of applicable anti.China's well-known trademarks in the protection of the law did not "water down" the reference, but on the substance of the relevant legal provisions, protection of anti-diluted. 2001 "Trademark Law" amendment to increase the protection of well-known trademarks, in particular, it is important to the well-known trademarks have been registered to conduct cross-category protection. Article 13 stipulates: "The meeting is not the same as or similar to the trademark application for registration of goods is copied, Mofang, translation others have been registered in the well-known trademarks, misleading the public, the standard of the well-known trade mark registration may be the interests of the damage, no registration And can not be used. "But needs to be pointed out that this provision does not mean that China's laws for the well-known trademarks has provided an effective anti-dilute the protection. "Trademark Law" will prohibit only well-known trademarks and trademarks of the same or similar use, without the same or similar goods not on the behavior, but thewell-known trade marks have faded in various forms, such as the well-known trademarks for names, domain names, such acts Detract from the same well-known trademarks destroyed the logo of the ability to make well-known trade mark registration of the interests of damage, this is not a legal norms.It must be pointed out that the trade mark that should be paying attention to downplay acts of the following:1, downplay acts are specifically for the well-known registered trade marks.Perpetrators diluted one of the main purpose is the free-rider, using the credibility of well-known trademarks to sell their products, and general use of trademarks do not have this value. 
That acts to dilute limited to well-known trademarks, can effectively protect the rights of trademark rights, have not excessively restrict the freedom of choice of logo, is right to resolve the conflict right point of balance. "Trademark Law" will be divided into well-known trademarks have been registered and unregistered, and give different protection. Anti-has been watered down to protect only against the well-known trade marks registration, and for China not only well-known trade marks registered in the same or similar ban on the registration and use of goods. This reflects the "Trademark Law" the principle of protection of registered trademarks.2, faded in the different categories of goods and well-known trademarks for use on the same or similar logo.If this is the same or similar goods with well-known trademarks for use on the same or similar to the logo should be in accordance with the general treatment of trademark infringement. There is also a need to downplay the use of the tags are similar to a well-known trademarks and judgments.3, not all the non-use of similar products on the well-known trade marks and logos of the same or similar circumstances are all faded.When a trademark has not yet become well-known trademarks, perhaps there aresome with the same or similar trademarks used in other types of goods on. In the well-known trademarks, the original has been in existence does not constitute a trademark of those who play down.4, acts that play down the perpetrator does not need to consider the subjective mental state.Regardless of their out of goodwill or malicious, intentional or fault, is not watered down the establishment. But the acts of subjective mental state will assume responsibility for its impact on the manner and scope. Generally speaking, if the perpetrator acts intentionally dilute the responsibility to shoulder much weight, in particular, bear a heavier responsibility for damages, if the fault is the commitment will be less responsibility. If there are no mistakes, just assume the responsibility to stop infringement.5, due to anti-faded to protect well-known trade marks with a specific goods or services linked to well-known trademarks a long time widely used in a variety of goods, will inevitably lead to trademark the logo of a particular commodity producers play down the link, well-known trademarks A unique attraction to consumers will also be greatly reduced. So that should not be watered down to conduct a source of confusion for the conditions of goods, after all, not all the water down will cause consumers confusion. For example, a street shop's name is "Rolls-Royce fruit shop," people at this time there will be no confusion and that the shop and the famous Rolls-Royce trademark or producers of the contact. However, such acts can not be allowed, a large number of similar acts will dilute the Rolls-Royce trademark and its products linked to undermine the uniqueness of the trademark, if things continue this way when the mention of Rolls-Royce trademark, people may think of is not only Automobile, food, clothing, appliances, etc.. That faded as to cause confusion for the conditions, some will not dilute norms and suppression of acts, makes well-known trade marks are not well protected. Therefore, as long as it is a well-known trademark detract from the logo and unique ability to act on the behavior should be identified as diluted.1. Zheng Chengsi: "Intellectual property law", legal publishers 2003 version.2. 
Wu Handong (ed.): Intellectual Property Law, China University of Political Science and Law Press, 2002 edition.
3. Susan Sela De: "The United States Federal Trademark Anti-Dilution Act: Legislation and Practice", translated by Zhang Jinyi, in Law on Foreign Translation, 1998, No. 4.
4. Kong Xiangjun: Anti-Unfair Competition Law Theory, People's Court Press, 2001 edition.
5. Liu Ping and Qi Chang: "On the Special Protection of Famous Trademarks", in Law and Commerce, 1998, No. 6.
6. Well-Tao and Lu Zhouli: "On the Anti-Dilution Protection of Well-Known Trademarks", in Law, 1998, No. 5.
2. Original text of the foreign-language reference: On the Dilution and Anti-Dilution of Well-Known Trademarks
First, an overview of well-known trademarks: a well-known trademark is a trademark that, through long-term use, enjoys a high reputation in the market, is familiar to the relevant public, and has been recognized as such through certain procedures.
Enterprise risk management: Chinese-English parallel translation of foreign literature (document contains the English original and the Chinese translation). Original text: Risk Management

This chapter reviews and discusses the basic issues and principles of risk management, including risk acceptability (tolerability), risk reduction and the ALARP principle, and the cautionary and precautionary principles, and presents a case study showing the importance of these issues and principles in a practical management context. Before we take a closer look, let us briefly address some basic features of risk management.

The purpose of risk management is to ensure that adequate measures are taken to protect people, the environment, and assets from possible harmful consequences of the activities being undertaken, as well as to balance different concerns, in particular risks and costs. Risk management includes measures both to avoid the hazards and to reduce their potential harm. Traditionally, in industries such as nuclear, oil, and gas, risk management was based on a prescriptive regulatory regime, in which detailed requirements were set with regard to the design and operation of the arrangements. This regime has gradually been replaced by a more goal-oriented regime, putting emphasis on what to achieve rather than on the means of achieving it.

Risk management is an integral aspect of a goal-oriented regime. It is acknowledged that risk cannot be eliminated but must be managed. There is nowadays an enormous drive and enthusiasm in various industries and in society as a whole to implement risk management in organizations. There are high expectations that risk management is the proper framework through which to achieve high levels of performance.

Risk management involves achieving an appropriate balance between realizing opportunities for gain and minimizing losses. It is an integral part of good management practice and an essential element of good corporate governance. It is an iterative process consisting of steps that, when undertaken in sequence, can lead to a continuous improvement in decision-making and facilitate a continuous improvement in performance.

To support decision-making regarding design and operation, risk analyses are carried out. They include the identification of hazards and threats, cause analyses, consequence analyses, and risk descriptions. The results are then evaluated. The totality of the analyses and the evaluations is referred to as risk assessment. Risk assessment is followed by risk treatment, a process involving the development and implementation of measures to modify the risk, including measures designed to avoid, reduce ("optimize"), transfer, or retain the risk. Risk transfer means sharing with another party the benefit or loss associated with a risk; it is typically effected through insurance. Risk management covers all coordinated activities in the direction and control of an organization with regard to risk.

In many enterprises, the risk management tasks are divided into three main categories: strategic risk, financial risk, and operational risk. Strategic risk includes aspects and factors that are important for the enterprise's long-term strategy and plans, for example mergers and acquisitions, technology, competition, political conditions, legislation and regulations, and the labor market. Financial risk concerns the enterprise's financial situation, and includes: market risk, associated with the costs of goods and services, foreign exchange rates, and securities (shares, bonds, etc.); credit risk, associated with a debtor's failure to meet its obligations in accordance with agreed terms.
Liquidity risk, reflecting lack of access to cash; the difficulty of selling an asset in a timely manner. Operational risk is related to conditions affecting the normal operating situation: Accidental events, including failures and defects, quality deviations, natural disasters. Intended acts; sabotage, disgruntled employees, etc. Loss of competence, key personnel. Legal circumstances, associated for instance, with defective contracts and liability insurance.For an enterprise to become successful in its implementation of risk management, top management needs to be involved, and activities must be put into effect on many levels. Some important points to ensure success are: the establishment of a strategy for risk management, i.e., the principles of how the enterprise defines and implements risk management. Should one simply follow the regulatory requirements (minimal requirements), or should one be the “best in the class”? The establishment of a risk management process for the enterprise, i.e. formal processes and routines that the enterprise is to follow. The establishment of management structures, with roles and responsibilities, such that the risk analysis process becomes integrated into the organization. The implementation of analyses and support systems, such as risk analysis tools, recording systems for occurrences of various types of events, etc. The communication, training, and development of a risk management culture, so that the competence, understanding, and motivation level within the organization is enhanced. Given the above fundamentals of risk management, the next step is to develop principles and a methodology that can be used in practical decision-making. This is not, however, straightforward. There are a number of challenges and here we address some of these: establishing an informative risk picture for the various decision alternatives, using this risk picture in a decision-making context. Establishing an informative risk picture means identifying appropriate risk indices and assessments ofuncertainties. Using the risk picture in a decision making context means the definition and application of risk acceptance criteria, cost benefit analyses and the ALARP principle, which states that risk should be reduced to a level which is as low as is reasonably practicable.It is common to define and describe risks in terms of probabilities and expected values. This has, however, been challenged, since the probabilities and expected values can camouflage uncertainties; the assigned probabilities are conditional on a number of assumptions and suppositions, and they depend on the background knowledge. Uncertainties are often hidden in this background knowledge, and restricting attention to the assigned probabilities can camouflage factors that could produce surprising outcomes. By jumping directly into probabilities, important uncertainty aspects are easily truncated, and potential surprises may be left unconsidered.Let us, as an example, consider the risks, seen through the eyes of a risk analyst in the 1970s, associated with future health problems for divers working on offshore petroleum projects. The analyst assigns a value to the probability that a diver would experience health problems (properly defined) during the coming 30 years due to the diving activities. Let us assume that a value of 1 % was assigned, a number based on the knowledge available at that time. 
There are no strong indications that the divers will experience health problems, but we know today that these probabilities led to poor predictions. Many divers have experienced severe health problems (Aven and Vinnem, 2007). By restricting risk to the probability assignments alone, important aspects of uncertainty and risk are hidden. There is a lack of understanding about the underlying phenomena, but the probability assignments alone are not able to fully describe this status.

Several risk perspectives and definitions have been proposed in line with this realization. For example, Aven (2007a, 2008a) defines risk as the two-dimensional combination of events/consequences and associated uncertainties (will the events occur, and what will the consequences be). A closely related perspective is suggested by Aven and Renn (2008a), who define risk associated with an activity as uncertainty about and severity of the consequences of the activity, where severity refers to intensity, size, extension, scope, and other potential measures of magnitude with respect to something that humans value (lives, the environment, money, etc.). Losses and gains, expressed for example in monetary terms or as the number of fatalities, are ways of defining the severity of the consequences. See also Aven and Kristensen (2005).

In the case of large uncertainties, risk assessments can support decision-making, but other principles, measures, and instruments are also required, such as the cautionary/precautionary principles as well as robustness and resilience strategies. An informative decision basis is needed, but it should be far more nuanced than can be obtained by a probabilistic analysis alone. This has been stressed by many researchers, e.g. Apostolakis (1990) and Apostolakis and Lemon (2005): quantitative risk analysis (QRA) results are never the sole basis for decision-making. Safety- and security-related decision-making is risk-informed, not risk-based. This conclusion is not, however, justified merely by referring to the need for addressing uncertainties beyond probabilities and expected values. The main issue here is the fact that risks need to be balanced with other concerns.

When various solutions and measures are to be compared and a decision is to be made, the analyses and assessments that have been conducted provide a basis for such a decision. In many cases, established design principles and standards provide clear guidance. Compliance with such principles and standards must be among the first reference points when assessing risks. It is common thinking that risk management processes, and especially ALARP processes, require formal guidelines or criteria (e.g., risk acceptance criteria and cost-effectiveness indices) to simplify the decision-making. Care must, however, be shown when using this type of formal decision-making criteria, as they easily result in a mechanization of the decision-making process. Such mechanization is unfortunate because: decision-making criteria based on risk-related numbers alone (probabilities and expected values) do not capture all the aspects of risk, costs, and benefits; no method has a precision that justifies a mechanical decision based on whether the result is over or below a numerical criterion.
It is a managerial responsibility to make decisions under uncertainty, and management should be aware of the relevant risks and uncertainties.

Apostolakis and Lemon (2005) adopt a pragmatic approach to risk analysis and risk management, acknowledging the difficulties of determining the probabilities of an attack. Ideally, they would like to implement a risk-informed procedure based on expected values. However, since such an approach would require the use of probabilities that have not been "rigorously derived", they see themselves forced to resort to a more pragmatic approach.

This is one possible approach when facing problems of large uncertainties. The risk analyses simply do not provide a sufficiently solid basis for the decision-making process. We argue along the same lines. There is a need for a management review and judgment process. It is necessary to see beyond the computed risk picture in the form of the probabilities and expected values. Traditional quantitative risk analyses fail in this respect. We acknowledge the need for analyzing risk, but question the value added by performing traditional quantitative risk analyses in the case of large uncertainties. The arbitrariness in the numbers produced can be significant, due to the uncertainties in the estimates or as a result of the uncertainty assessments being strongly dependent on the analysts.

It should be acknowledged that risk cannot be accurately expressed using probabilities and expected values. A quantitative risk analysis is in many cases better replaced by a more qualitative approach, as shown in the examples above; an approach which may be referred to as a semi-quantitative approach. Quantifying risk using risk indices such as the expected number of fatalities gives an impression that risk can be expressed in a very precise way. However, in most cases, the arbitrariness is large. In a semi-quantitative approach this is acknowledged by providing a more nuanced risk picture, which includes factors that can cause "surprises" relative to the probabilities and the expected values. Quantification often requires strong simplifications and assumptions and, as a result, important factors could be ignored or given too little (or too much) weight. In a qualitative or semi-quantitative analysis, a more comprehensive risk picture can be established, taking into account underlying factors influencing risk. In contrast to the prevailing use of quantitative risk analyses, the precision level of the risk description is then in line with the accuracy of the risk analysis tools. In addition, risk quantification is very resource demanding. One needs to ask whether the resources are used in the best way. We conclude that in many cases more is gained by opening up the way to a broader, more qualitative approach, which allows for considerations beyond the probabilities and expected values.

The traditional quantitative risk assessments seen, for example, in the nuclear and the oil and gas industries provide a rather narrow risk picture, through calculated probabilities and expected values, and we conclude that this approach should be used with care for problems with large uncertainties. Alternative approaches highlighting the qualitative aspects are more appropriate in such cases. A broad risk description is required. This is also the case in normative ambiguity situations, as the risk characterizations provide a basis for the risk evaluation processes.
The main concern is the value judgments, but these should be supported by solid scientific assessments showing a broad risk picture. If one tries to demonstrate, on a scientific basis, that it is rational to accept a risk, too narrow an approach to risk has been adopted. Recognizing uncertainty as a main component of risk is essential for successfully implementing risk management in cases of large uncertainties and normative ambiguity.

A risk description should cover computed probabilities and expected values, as well as: sensitivities showing how the risk indices depend on the background knowledge (assumptions and suppositions); uncertainty assessments; and a description of the background knowledge, including the models and data used.

The uncertainty assessments should not be restricted to standard probabilistic analysis, as such analysis could hide important uncertainty factors. The search for quantitative, explicit approaches for expressing the uncertainties, even beyond the subjective probabilities, may seem to be a possible way forward. However, such an approach is not recommended: trying to be precise and to accurately express what is extremely uncertain does not make sense. Instead we recommend a more open, qualitative approach to reveal such uncertainties. Some might consider this to be less attractive from a methodological and scientific point of view. Perhaps it is, but it is more suited to solving the problem at hand, which is the analysis and management of risk and uncertainties.

Source: Terje Aven. 2010. “Risk Management”. Risk in Technological Systems, Oct, pp. 175-198.

Translation (译文): Risk Management. This chapter reviews and discusses the fundamental issues and principles of risk management, including risk acceptability (tolerability), risk reduction and safety management principles, and the cautionary and precautionary principles, and presents a case study illustrating the importance of these issues and principles in a practical management setting.
Electronic power steering system

What it is: Electrically powered steering uses an electric motor to drive either the power steering hydraulic pump or the steering linkage directly. The power steering function is therefore independent of engine speed, resulting in significant energy savings.

How it works: Conventional power steering systems use an engine accessory belt to drive the pump, providing pressurized fluid that operates a piston in the power steering gear or actuator to assist the driver. In electro-hydraulic steering, one electrically powered steering concept uses a high-efficiency pump driven by an electric motor. Pump speed is regulated by an electric controller to vary pump pressure and flow, providing steering efforts tailored to different driving situations. The pump can be run at low speed or shut off to provide energy savings during straight-ahead driving (which is most of the time in most world markets). Direct electric steering uses an electric motor attached to the steering rack via a gear mechanism (no pump or fluid). A variety of motor types and gear drives is possible. A microprocessor controls steering dynamics and driver effort. Inputs include vehicle speed and steering wheel torque, angular position and turning rate.

Working in detail: A "steering sensor" is located on the input shaft where it enters the gearbox housing. The steering sensor is actually two sensors in one: a "torque sensor" that converts steering torque input and its direction into voltage signals, and a "rotation sensor" that converts the rotation speed and direction into voltage signals. An "interface" circuit that shares the same housing converts the signals from the torque sensor and rotation sensor into signals the control electronics can process. Inputs from the steering sensor are digested by a microprocessor control unit that also monitors input from the vehicle's speed sensor. The sensor inputs are then compared to determine how much power assist is required according to a preprogrammed "force map" in the control unit's memory. The control unit then sends out the appropriate command to the "power unit", which supplies the electric motor with current. The motor pushes the rack to the right or left depending on which way the voltage flows (reversing the current reverses the direction the motor spins). Increasing the current to the motor increases the amount of power assist.

The system has three operating modes: a "normal" control mode in which left or right power assist is provided in response to input from the steering torque and rotation sensors; a "return" control mode which is used to assist steering return after completing a turn; and a "damper" control mode that changes with vehicle speed to improve road feel and dampen kickback. If the steering wheel is turned and held in the full-lock position and steering assist reaches a maximum, the control unit reduces current to the electric motor to prevent an overload situation that might damage the motor. The control unit is also designed to protect the motor against voltage surges from a faulty alternator or charging problem.

The electronic steering control unit is capable of self-diagnosing faults by monitoring the system's inputs and outputs and the driving current of the electric motor. If a problem occurs, the control unit turns the system off by actuating a fail-safe relay in the power unit. This eliminates all power assist, causing the system to revert to manual steering.
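The control flow described above (torque signal in, force-map lookup, current command out, with overload clamping) can be sketched in a few lines of C. This is a minimal illustration under assumed numbers only: the gain table, the 60 A current limit and all function names are invented for the example and do not come from any actual controller firmware.

```c
/*
 * Minimal sketch of the "force map" assist logic described above.
 * All names, map values and thresholds are illustrative assumptions;
 * a production EPS control unit is far more involved.
 */
#include <stdio.h>

#define MAX_ASSIST_CURRENT_A 60.0   /* assumed motor current limit */

/* Assumed force map: assist gain (amps per Nm of driver torque) falls as
 * vehicle speed rises, mirroring "maximum assist at low speed, minimum
 * assist at high speed". */
static double assist_gain(double speed_kph)
{
    if (speed_kph < 20.0)  return 6.0;
    if (speed_kph < 60.0)  return 4.0;
    if (speed_kph < 120.0) return 2.0;
    return 1.0;
}

/* Signed motor current command: positive = assist right, negative = left. */
static double assist_current(double torque_nm, double speed_kph)
{
    double current = assist_gain(speed_kph) * torque_nm;

    /* Overload protection: clamp the command, as the control unit does
     * when the wheel is held at full lock. */
    if (current >  MAX_ASSIST_CURRENT_A) current =  MAX_ASSIST_CURRENT_A;
    if (current < -MAX_ASSIST_CURRENT_A) current = -MAX_ASSIST_CURRENT_A;
    return current;
}

int main(void)
{
    /* Example: 3 Nm of driver torque to the right at parking speed. */
    printf("command = %.1f A\n", assist_current(3.0, 5.0));
    return 0;
}
```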
A dash EPS warning light is also illuminated to alert the driver. To diagnose the problem, a technician jumps the terminals on the service check connector and reads out the trouble codes.

Electric power steering systems promise weight reduction, fuel savings and package flexibility, at no cost penalty. Europe's high fuel prices and smaller vehicles make a fertile testbed for electric steering, a technology that promises automakers weight savings and fuel economy gains. And in a short time, electric steering will make it to the U.S., too. "It's just a matter of time," says Aly Badawy, director of research and development for Delphi Saginaw Steering Systems in Saginaw, Mich. "The issue was cost and that's behind us now. By 2002 here in the U.S. the cost of electric power steering will absolutely be a wash over hydraulic."

Today, electric and hybrid-powered vehicles (EVs), including Toyota's Prius and GM's EV-1, are the perfect domain for electric steering. But by 2010, a TRW Inc. internal study estimates, one out of every three cars produced in the world will be equipped with some form of electrically assisted steering. The Cleveland-based supplier claims its new steering systems could improve fuel economy by up to 2 mpg while enhancing handling. There are true bottom-line benefits for automakers as well, from reducing overall costs and decreasing assembly time, since there is no need for pumps, hoses and fluids.

Another claimed advantage is shortened development time. For instance, a Delphi group developed E-TUNE, a ride-and-handling software package that can be run off a laptop computer. "They can take that computer and plug it in, attach it to the controller and change all the handling parameters -- effort level, returnability, damping -- on the fly," Badawy says. "It used to take months." Delphi has one OEM customer that should start low-volume production in '99.

Electric steering units are normally placed in one of three positions: column-drive, pinion-drive and rack-drive. Which system will become the norm is still unclear. Short term, OEMs will choose the steering system that is easiest to integrate into an existing platform. Obviously, greater potential comes from designing the system into an all-new platform. "We have all three designs under consideration," says Dr. Herman Strecker, group vice president of the steering systems division at ZF in Schwaebisch Gmuend, Germany. "It's up to the market and OEMs which version finally will be used and manufactured."

"The large manufacturers have all grabbed hold of what they consider a core technology," explains James Handysides, TRW vice president, electrically assisted steering, in Sterling Heights, Mich. His company offers a portfolio of electric steering systems (hybrid electric, rack-, pinion- and column-drive). TRW originally concentrated on what it still believes is the purest engineering solution for electric steering -- the rack-drive system. The system is sometimes referred to as direct drive or ball/nut drive. Still, this winter TRW hedged its bet, forming a joint venture with LucasVarity. The British supplier received $50 million in exchange for its electric column-drive steering technology and assets. Initial production of the column- and pinion-drive electric steering systems is expected to begin in Birmingham, England, in 2000. "What we lack is the credibility in the steering market," says Brendan Conner, managing director, TRW/LucasVarity Electric Steering Ltd.
"The combination with TRW provides us witha good opportunity for us to bridge that gap." LucasVarity currently has experimental systems on11 different vehicle types, mostly European. TRW is currently supplying its EAS systems for Ford and Chrysler EVs in North America and for GM's new Opel Astra.In 1995, according to Delphi, traditional hydraulic power steering systems were on 7596 of all vehicles sold globally. That 37-million vehicle pool consumes about 10 million gallons in hydraulic fluid that could be superfluous, if electric steering really takes off.The present invention relates to an electrically powered drive mechamsm for providing powered assistance to a vehicle steering mechanism. According to one aspect of the present invention, there is provided an electrically powered driven mechanism for providing powered assistance to a vehicle steering mechanism having a manually rotatable member for operating the steering mechanism, the drive mechanism including a torque sensor operable to sense torque being manually applied to the rotatable member, an electrically powered drive motor drivingly connected to the rotatable member and a controller which is arranged to control the speed and direction of rotation of the drive motor in response to signals received from the torque sensor, the torque sensor including a sensor shaft adapted for connection to the rotatable member to form anextension thereof so that torque is transmitted through said sensor shaft when the rotatable member is manually rotated and a strain gauge mounted on the sensor shaft for producing a signal indicative of the amount of torque being transmitted through said shaft. Preferably the sensor shaft is non-rotatably mounted at one axial end in a first coupling member and is non-rotatably mounted at its opposite axial end in a second coupling member, the first and second coupling members being inter-engaged to permit limited rotation there between so that torque under a predetermined limit is transmitted by the sensor shaft only and so that torque above said predetermined limit is transmitted through the first and second coupling members. The first and second coupling members are preferably arranged to act as a bridge for drivingly connecting first and second portions of the rotating member to one another. Preferably the sensor shaft is of generally rectangular cross-section throughout the majority of its length. Preferably the strain gauge includes one or more SAW resonators secured to the sensor shaft. Preferably the motor is drivingly connected to the rotatable member via a clutch .Preferably the motor includes a gear box and is concentrically arranged relative to the rotatable member. Various aspects of the present invention will hereafter be described, with reference to the accompanying drawings, in which :Figure 1 is a diagrammatic view of a vehicle steering mechanism including an electrically powered drive mechanism according to the present invention, Figure 2 is a flow diagram illustrating interaction between various components of the drive mechanism shown in Figure 1 ,Figure 3 is an axial section through the drive mechanism shown in Figure 1, Figure 4 is a sectional view taken along lines IV-IV in Figure 3,Figure 5 is a more detailed exploded view of the input drives coupling shown in Figure 3, and Figure 6 is a more detailed exploded view of the clutch showing in Figure 3. 
Referring initially to Figure 1, there is shown a vehicle steering mechanism 10 drivingly connected to a pair of steerable road wheels 12. The steering mechanism 10 shown includes a rack and pinion assembly 14 connected to the road wheels 12 via joints 15. The pinion (not shown) of assembly 14 is rotatably driven by a manually rotatable member in the form of a steering column 18 which is manually rotated by a steering wheel 19. The steering column 18 includes an electrically powered drive mechanism 30 which includes an electric drive motor (not shown in Figure 1) for driving the pinion in response to torque loadings in the steering column 18, in order to provide power assistance for the operative when rotating the steering wheel 19.

As schematically illustrated in Figure 2, the electrically powered drive mechanism includes a torque sensor 20 which measures the torque applied by the steering column 18 when driving the pinion and supplies a signal to a controller 40. The controller 40 is connected to a drive motor 50 and controls the electric current supplied to the motor 50, so controlling the amount of torque generated by the motor 50 and the direction of its rotation. The motor 50 is drivingly connected to the steering column 18, preferably via a gear box 60, preferably an epicyclic gear box, and a clutch 70. The clutch 70 is preferably permanently engaged during normal operation and is operative under certain conditions to isolate drive from the motor 50 to enable the pinion to be driven manually through the drive mechanism 30. This is a safety feature to enable the mechanism to function in the event of the motor 50 attempting to drive the steering column too fast and/or in the wrong direction, or in the case where the motor and/or gear box have seized.

The torque sensor 20 is preferably an assembly including a short sensor shaft on which is mounted a strain gauge capable of accurately measuring strain in the sensor shaft brought about by the application of torque within a predetermined range. Preferably the predetermined range of torque which is measured is 0-10 Nm; more preferably it is about 1-5 Nm. Preferably the range of measured torque corresponds to about 0-1000 microstrain, and the construction of the sensor shaft is chosen such that a torque of 5 Nm will result in a twist of less than 2° in the shaft, more preferably less than 1°. Preferably the strain gauge is a SAW resonator, a suitable SAW resonator being described in WO91/13832. Preferably a configuration similar to that shown in Figure 3 of WO91/13832 is utilised, wherein two SAW resonators are arranged at 45° to the shaft axis and at 90° to one another. Preferably the resonators operate with a resonance frequency of between 200-400 MHz and are arranged to produce a signal to the controller 40 of 1 MHz ± 500 kHz depending upon the direction of rotation of the sensor shaft. Thus, when the sensor shaft is not being twisted, due to the absence of torque, it produces a 1 MHz signal. When the sensor shaft is twisted in one direction it produces a signal between 1.0 and 1.5 MHz; when the sensor shaft is twisted in the opposite direction it produces a signal between 1.0 and 0.5 MHz. Thus the same sensor is able to produce a signal indicative of both the degree of torque and the direction of rotation of the sensor shaft.
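The frequency-to-torque mapping described above lends itself to a short illustration. The sketch below assumes a linear relationship between the resonator signal and the transmitted torque over a ±10 Nm full-scale range; the 1 MHz centre frequency and the ±500 kHz span come from the text, while the linearity and the full-scale value are assumptions made only for the example.

```c
/*
 * Sketch of converting the SAW torque-sensor signal described above into
 * a signed torque value.  The centre frequency and span are taken from
 * the text; the linear mapping and the ±10 Nm full scale are assumed.
 */
#include <stdio.h>

#define CENTRE_FREQ_HZ  1.0e6   /* zero-torque output              */
#define FULL_SPAN_HZ    0.5e6   /* deviation at full-scale torque  */
#define FULL_SCALE_NM   10.0    /* assumed full-scale torque       */

/* Positive result: shaft twisted one way; negative: the other way. */
static double torque_from_frequency(double freq_hz)
{
    return (freq_hz - CENTRE_FREQ_HZ) / FULL_SPAN_HZ * FULL_SCALE_NM;
}

int main(void)
{
    printf("%.2f Nm\n", torque_from_frequency(1.15e6));  /* ~ +3 Nm */
    printf("%.2f Nm\n", torque_from_frequency(0.90e6));  /* ~ -2 Nm */
    return 0;
}
```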
Preferably the amount of torque generated by the motor in response to a measured torque of between 0-10 Nm is 0-40 Nm, and for a measured torque of between 1-5 Nm it is 0-25 Nm. Preferably a feedback circuit is provided whereby the electric current being used by the motor is measured and compared by the controller 40, to ensure that the motor is running in the correct direction and providing the desired amount of power assistance. Preferably the controller acts to reduce the measured torque to zero, and so controls the motor to increase its torque output in order to reduce the measured torque. A vehicle speed sensor (not shown) is preferably provided which sends a signal indicative of vehicle speed to the controller. The controller uses this signal to modify the degree of power assistance provided in response to the measured torque. Thus at low vehicle speeds maximum power assistance will be provided, and at high vehicle speeds minimum power assistance will be provided.
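As a rough illustration of this feedback behaviour, the sketch below raises the assist torque step by step until the measured column torque approaches zero, with a speed-dependent ceiling on the assist. Only the 0-40 Nm assist range is taken from the text; the proportional-control form, the gains, the speed schedule and the crude plant model in main() are assumptions made for illustration.

```c
/*
 * Illustrative sketch of the feedback described above: the controller
 * increases motor torque so the measured column torque decays towards
 * zero, and the assist ceiling falls as vehicle speed rises.  Gains,
 * limits (other than the 0-40 Nm range) and the plant model are assumed.
 */
#include <stdio.h>

/* Assumed speed-dependent ceiling on assist torque (Nm). */
static double assist_limit(double speed_kph)
{
    double limit = 40.0 - 0.25 * speed_kph;  /* full assist when parking */
    return limit > 5.0 ? limit : 5.0;        /* keep a minimum at speed  */
}

/* One control step: nudge the motor torque so the measured torque decays. */
static double update_assist(double motor_nm, double measured_nm,
                            double speed_kph)
{
    const double kp = 0.5;                   /* assumed proportional gain */
    double next = motor_nm + kp * measured_nm;
    double max  = assist_limit(speed_kph);

    if (next >  max) next =  max;
    if (next < -max) next = -max;
    return next;
}

int main(void)
{
    double motor = 0.0, driver = 4.0;        /* 4 Nm held by the driver   */

    for (int step = 0; step < 5; ++step) {
        /* Crude plant model: assist reduces the torque felt at the column. */
        double measured = driver - 0.1 * motor;
        motor = update_assist(motor, measured, 10.0);
        printf("step %d: assist = %.1f Nm\n", step, motor);
    }
    return 0;
}
```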