Evaluating the impact of simultaneous multithreading on network servers using real hardware
Autodesk® Moldflow® Insight
Plastics made perfect.

Autodesk® Moldflow® Insight software, part of the Autodesk® solution for Digital Prototyping, provides injection molding simulation tools for use on digital prototypes. Providing in-depth validation and optimization of plastic parts and associated injection molds, Autodesk Moldflow Insight software helps study the injection molding processes in use today. Used by some of the top manufacturers in the automotive, consumer electronics, medical, and packaging industries, Autodesk Moldflow Insight software helps to reduce the need for costly mold rework and physical prototypes, minimize delays associated with removing molds from production, and get innovative products to market faster.

Autodesk Moldflow Insight Product Line
Autodesk is dedicated to providing a wide range of injection molding simulation tools to help CAE analysts, designers, engineers, mold makers, and molding professionals create accurate digital prototypes and bring better products to market at less cost.

Validation and Optimization of Plastic Parts
With the use of plastic parts on the rise in almost every industry, and the pressure to reduce costs and cut time to market, the need for simulation tools that provide deep insight into the plastic injection molding process has never been greater. Validate and optimize plastic parts, injection molds, and the injection molding process.

Hot Runner Systems
Model hot runner system components and set up sequential valve gates to help eliminate weld lines and control the packing phase.

Plastic Flow Simulation
Simulate the flow of melted plastic to help optimize part and mold designs, reduce potential part defects, and improve the molding process.

Part Defects
Determine potential part defects, such as weld lines, air traps, and sink marks, and then rework designs to help avoid these problems.

Thermoplastic Filling
Simulate the filling phase of the thermoplastic injection molding process to help predict the flow of melted plastic and fill mold cavities uniformly; avoid short shots; and eliminate, minimize, or reposition weld lines and air traps.

Thermoplastic Packing
Optimize packing profiles and visualize the magnitude and distribution of volumetric shrinkage to help minimize part warpage and reduce defects, such as sink marks.

Feed System Simulation
Model and optimize hot and cold runner systems and gating configurations. Improve part surfaces, minimize part warpage, and reduce cycle times.

Gate Location
Identify up to 10 gate locations simultaneously. Minimize injection pressure and exclude specific areas when determining gate location.

Runner Design Wizard
Create feed systems based on inputs for layout, size, and type of components, such as sprue, runners, and gates.

Balancing Runners
Balance runner systems of single-cavity, multicavity, and family mold layouts so parts fill simultaneously, reducing stress levels and volume of material.

Mold Cooling Simulation
Improve cooling system efficiency, minimize part warpage, achieve smooth surfaces, and reduce cycle times.

Cooling Component Modeling
Analyze the mold's cooling system efficiency. Model cooling circuits, baffles, bubblers, and mold inserts and bases.

Cooling System Analysis
Optimize mold and cooling circuit designs to help achieve uniform part cooling, minimize cycle times, reduce part warpage, and decrease manufacturing costs.

Warpage
Predict warpage resulting from process-induced stresses. Identify where warpage might occur and optimize part and mold design, material choice, and processing parameters to help control part deformation.

Core Shift Control
Minimize the movement of mold cores by determining ideal processing conditions for injection pressure, packing profile, and gate locations.

Fiber Orientation
Control fiber orientation within plastics to help reduce part shrinkage and warpage across the molded part.

CAE Data Exchange
Validate and optimize plastic part designs using tools to exchange data with structural simulation software. CAE data exchange is available with Autodesk® Algor® Simulation, ANSYS®, and Abaqus® structural simulation software to account for the effects of processing on the performance of fiber-filled, injection-molded plastic parts when subjected to service loading.

Rapid Heat Cycle Molding
Set up variable mold surface temperature profiles to maintain warmer temperatures during filling to achieve smooth surfaces, and reduce temperatures in the packing and cooling phases to help freeze parts and decrease cycle times.

Shrinkage & Warpage Simulation
Evaluate part and mold designs to help control shrinkage and warpage.

Shrinkage
Meet part tolerances by predicting part shrinkage based on processing parameters and grade-specific material data.

Thermoset Flow Simulation
Simulate thermoset injection molding, RIM/SRIM, resin transfer molding, and rubber compound injection molding.

Reactive Injection Molding
Predict how molds will fill with or without fiber-reinforced pre-forms. Help avoid short shots due to pre-gelation of resin, and identify air traps and problematic weld lines. Balance runner systems, select molding machine size, and evaluate thermoset materials.

Microchip Encapsulation
Simulate encapsulation of semiconductor chips with reactive resins and the interconnectivity of electrical chips. Predict bonding wire deformation within the cavity and shifting of the lead frame due to pressure imbalances.

Underfill Encapsulation
Simulate flip-chip encapsulation to predict material flow in the cavity between the chip and the substrate.

Leading-Edge Simulation Tools
Use leading-edge simulation tools to solve design challenges.

Insert Overmolding
Run an insert overmolding simulation to help determine the impact of mold inserts on melt flow, cooling rate, and part warpage.

Two-Shot Sequential Overmolding
Simulate the two-shot sequential overmolding process: one part is filled; the tool opens and indexes to a new position; and a second part is molded over the first.

Birefringence
Predict the optical performance of an injection-molded part by evaluating refractive index changes that result from process-induced stresses. Evaluate multiple materials, processing conditions, and gate and runner designs to help control birefringence in the part.

Specialized Molding Processes
Simulate a wide range of plastic molding processes and state-of-the-art process applications.

Gas-Assisted Injection Molding
Determine where to position polymer and gas entrances, how much plastic to inject prior to gas injection, and how to optimize the size and placement of gas channels.

Co-Injection Molding
Visualize the advancement of skin and core materials in the cavity and view the dynamic relationship between the two materials as filling progresses. Optimize material combinations while maximizing the product's cost-performance ratio.

Injection-Compression Molding
Simulate simultaneous or sequential polymer injection and mold compression. Evaluate material candidates, part and mold design, and processing conditions.

CAD Interoperability and Meshing
Use tools for native CAD model translation and optimization. Get geometry support for thin-walled parts and thick and solid applications. Select mesh type based on desired simulation accuracy and solution time.

CAD Solid Models
Import and mesh solid geometry from Parasolid®-based CAD systems, Autodesk® Inventor® software, CATIA® V5, Pro/ENGINEER®, and SolidWorks®, as well as IGES and STEP universal files.

Error Checking and Repair
Scan imported geometry and automatically fix defects that can occur when translating the model from CAD software.

Centerline Import/Export
Import and export feed system and cooling channel centerlines from and to CAD software to help decrease modeling time and avoid runner and cooling channel modeling errors.

Autodesk® Moldflow® CAD Doctor
Check, correct, heal, and simplify solid models imported from 3D CAD systems to prepare for simulation.

3D Simulations
Perform 3D simulations on complex geometry using a solid, tetrahedral, finite element mesh technique. Ideal for electrical connectors, thick structural components, and geometries with thickness variations.

Dual Domain Technology
Simulate solid models of thin-walled parts using Dual Domain™ technology. Work directly from 3D solid CAD models, leading to easier analysis of design iterations.

Midplane Meshes
Generate 2D planar surface meshes with assigned thicknesses for thin-walled parts.

Results Interpretation & Presentation
Use a wide range of tools for model visualization, results evaluation, and presentation.

Automatic Reporting Tools
Use the Report Generation Wizard to create web-based reports. Prepare and share simulation results more quickly and easily with customers, vendors, and team members.

Microsoft® Office
Export results and images for use in Microsoft® Word reports and PowerPoint® presentations.

Autodesk® Moldflow® Communicator
Collaborate with manufacturing personnel, procurement engineers, suppliers, and external customers using Autodesk® Moldflow® Communicator software. The Autodesk Moldflow Communicator results viewer enables you to export results from Autodesk Moldflow software so stakeholders can more easily visualize, quantify, and compare simulation results.

Material Data
Improve simulation accuracy with precise material data.

Materials Database
Use the built-in materials database of grade-specific information on more than 8,000 plastic materials characterized for use in plastic injection molding simulation.

Autodesk® Moldflow® Plastics Labs
Get state-of-the-art plastic material testing services, expert data-fitting services, and extensive material databases.

Productivity Tools
Use extensive help to boost productivity.

Help
Get help on a results plot, including information on what to look for and how to correct typical problems. Learn more about solver theory, interpreting simulation results, and designing better plastic parts and injection molds.

Automation and Customization
Automate common tasks and customize Autodesk Moldflow software for your organization.

API Tools
Application programming interface (API) tools expand the functionality of Autodesk Moldflow software by enabling you to automate common tasks, customize the user interface, work with third-party applications, and help implement corporate standards and best practices.

Workspaces
Customize the user interface and application features for your team. Set up profiles to guide new users through the simulation process and identify common problems. Define other profiles to give additional functionality and flexibility to more experienced users.

Results Evaluation and Productivity Tools
Visualize and evaluate simulation results, and use the automatic reporting tools to share the results with stakeholders. Take advantage of features such as a materials database and customizable workspaces to further boost productivity.
Source text for translation

Pollution haven hypothesis and environmental impacts of foreign direct investment: The case of industrial emission of sulfur dioxide (SO2) in Chinese provinces
Material source: CERDI, Etudes et Documents, E 2005.06
Author: Jie He

Abstract
Recognizing the complex inter-correlation between FDI, emission and the three economic determinants of emission, we construct a simultaneous model to study the FDI-emission nexus in China by exploring both the dynamic recursive FDI entry decision and the linkage from FDI entry to final emission results under the intermediation of the scale, composition and technique effects. The model is then estimated on panel data of industrial SO2 emission for 29 Chinese provinces. The results show that, exerting itself through different channels, the total impact of FDI on industrial SO2 emission is very small: with a 1% increase in FDI capital stock, industrial SO2 emission increases by only 0.099%, the emission increase working through the scale and composition effects being largely offset by the impact of FDI's role in the reinforcement of environmental regulation. By introducing into the simultaneous system a recursive dynamism that supposes the FDI entry decision to depend on the previous period's economic growth and environmental regulation stringency, our model also provides convincing supportive evidence for the 'pollution haven' hypothesis. Although FDI enterprises in China generally produce with higher pollution efficiency, a rise in environmental regulation stringency still has a modest deterrent effect on FDI capital inflow. Furthermore, the composition transformation impact of FDI in China seems to be dominated by the inflow of foreign capital pursuing a 'production platform' that provides lower pollution regulation compliance cost.

Keywords: foreign direct investment, industrial SO2 emission, simultaneity, scale effect, composition effect, income effect, pollution haven hypothesis

Introduction
The market-oriented economic reform has gradually turned China into one of the most attractive destinations for foreign direct investment (FDI) in the world. During most of the 1990s, China was the world's second largest FDI recipient, just behind the United States, and the largest FDI recipient in the developing world. After entering the new millennium, and contrary to the decreasing tendency of FDI inflows in many OECD economies due to their sluggish macroeconomic performance, China experienced a steady increase in FDI inflow. According to OECD (2004), China became the world's biggest FDI recipient in 2003, with an annual FDI inflow of about 53.3 billion US dollars, considerably higher than that of Germany (47 billion USD) and of the United States (40 billion USD) in the same year. Over 25 years of economic reform, China has received in total almost 500 billion USD of foreign direct investment (SSB, 2004). From the evolution of annual FDI actually utilized in China since 1978, illustrated in Figure 1, we observe a generally increasing trajectory, with the most important rise occurring after 1993 and in the last several years.
OECD (2004) indicated that in the early years of China's economic reform, FDI choosing China as a destination aimed at integrating its cheap labor resources into global production chains; more recently, however, there is an increasing tendency for foreign companies to invest in China as part of their strategies to serve local clients or to acquire a strategic position in China's enormous market.

(Insert Figure 1 about here)

However, China's remarkable openness process during the last 25 years seems to have been accompanied by obvious environmental pollution problems. The air pollution situation in urban areas started deteriorating quickly during the first decade of economic reform in the 1980s. Although some improvement came during the 1990s owing to the reinforcement of pollution control policies, 2/3 of Chinese cities still fail to meet the air quality standard established by China's Environmental Protection Agency (EPA), which means that more than 3/4 of the urban population are exposed to seriously polluted air. What is the possible relationship between the rapid FDI inflow and the air pollution deterioration? Should the inflow of FDI be held responsible for China's air pollution situation?

Aiming at a better understanding of the FDI-environment nexus, this paper constructs a five-equation simultaneous system that includes both the FDI location decision with respect to the host country's environmental regulation stringency and the impact of FDI on pollution through various underlying simultaneous mechanisms. This simultaneous system is then tested on panel data of industrial sulfur dioxide (SO2) emission for 29 Chinese provinces over the period 1994-2001, during which the FDI inflow experienced its most important increase. The time-constant specific effect of each province is captured by fixed-effect parameters. To correct potential first-order serial correlation and heteroskedasticity in each estimated function, an instrumentation method inspired by the GMM-system estimators of Blundell and Bond (1998) and Sevestre and Trognon (1996) for dynamic panel data is used at the equation level. Finally, to employ the full information imparted by the simultaneous system and to avoid inconsistency in estimation caused by inter-equation residual correlation, we use a Generalized Method of Moments (GMM) estimator for simultaneous systems to estimate the whole system.

The organization of the paper is as follows. In the second section, we make a brief literature review to explain the necessity of investigating the relationship between FDI and environment through a structural simultaneous system, by revealing the complexity of the FDI-pollution nexus. Section 3 gives a simple introduction to the industrial SO2 emission and FDI situation in the Chinese provinces; the simultaneous model is presented in the fourth section. The econometric results are presented and discussed in Section 5. Finally, we conclude in Section 6.

2. FDI-pollution nexus: literature review
Most of the existing literature did not treat the FDI-pollution nexus directly, but based its analyses on the causality from environmental regulation stringency to firms' competitiveness as an entry point. It supposed that, under globalization, the relatively lax environmental regulation in developing countries becomes an attractive comparative advantage for pollution-intensive foreign capital seeking a 'pollution haven' in order to avoid paying costly pollution control compliance expenditure domestically.
Though this 'pollution haven' hypothesis sounds reasonable, almost no empirical analysis has yet provided convincing supportive evidence revealing FDI's search for 'production platforms' permitting lower pollution abatement cost. Besides the potential explanation residing in measurement problems for both environmental regulation stringency and FDI flows, most authors attribute the inability to detect a significant regulation-FDI flows nexus to the complexity of the relationship between the two. Firstly, compared with the classical determinant factors in the FDI location decision, such as conventional production factor costs, tax rate differentials, the host country's market size, exchange rate risk, trade impediments and market power, environmental regulation compliance cost is not a critical cost factor for most private firms. Dasgupta, Wang and Wheeler (1997) find that the control cost for sulfur dioxide pollution in large-scale Chinese industrial enterprises is just a few dollars per ton until the control rate rises above 70%. Various studies based on developed countries' firm-level data also found that the total factor productivity decline caused by reinforced environmental regulation generally stays modest (Denison, 1979; Gray, 1987; Haveman and Christiansen, 1981; etc.). This suggests that the pollution control cost differential does not provide OECD firms with a strong incentive to move offshore (Jaffe et al., 1995). Secondly, in contrast to the 'pollution haven' hypothesis, a classical piece of economic reasoning based on the analogy with traditional static comparative advantage, the Porter hypothesis asserts that, from a dynamic point of view, environmental regulation stringency can encourage efficiency innovation and guide production procedures to become more environment-friendly (Porter and van der Linde, 1995; Xepapadeas and de Zeeuw, 1999). This dynamic technical progress can further induce a 'negative cost', which benefits productivity thanks to the cleaned-up environment (Jaffe et al., 1995). Following this point of view, firms' 'technology profiting' activities catalyzed by the reinforcement of environmental control policy will be able to cancel out the differential in pollution abatement cost between countries, so capital flight due to this differential is actually unnecessary in the long run. Thirdly, the insignificance of environmental regulation in explaining capital flows might also be due to the potentially reversed causality between these two phenomena. On the one hand, for a developed economy, the 'race-to-the-bottom' hypothesis emphasizes the possibility that profit-driven capital outflow pursuing the lowest production cost might create pressure on the government to lower its environmental standards (Revesz, 199?). On the other hand, several 'pollution haven' studies based on the historical experiences of developing countries showed that, as income increases with FDI inflow, environmental regulation, which is strongly correlated with income level, will also increase with FDI inflow; therefore the 'pollution haven' should only be a transient phenomenon (Mani and Wheeler, 1997). Given these two aspects, the cost gap in emission abatement between developed and developing countries should have a tendency to decrease with the inter-country movements of FDI. Finally, most of the 'pollution haven' studies used total pollution abatement cost as an approximation for environmental regulation stringency.
However, to some extent this indicator can also be regarded as a measure of the host economy's total technical effort on pollution abatement, in which we should not ignore the contribution of the technically more efficient FDI firms. Going a step further, even if we can prove the causality from environmental regulation stringency to dirty FDI inflow into developing countries, this does not immediately mean that pollution will increase in the host country. As found in some studies (Eskeland and Harrison, 1997; etc.), FDI enterprises specialized in pollution-intensive industries generally employ production and abatement technologies that are more environment-friendly than those of their domestic competitors in the host developing countries. This might be because heavy emission may signal to investors that an FDI firm's production techniques are inefficient and hence reduce their expectation of the liability of these multinational corporations (Dasgupta, Laplante and Mamingi, 1997); or simply because investing in developing countries is part of the global-scale production arrangement strategy of multinational enterprises, so that adapting production technology to the local environmental standard is not necessary. If these FDI corporations replace the relatively less efficient domestic firms in the same production, we can expect a declining tendency in the total pollution of the developing host country. Moreover, the presence of FDI enterprises may also reinforce competition and urge domestic firms to enhance research and development activity and to increase their production efficiency, which will, in the long run, strengthen the technical efficiency of the whole host economy.

The FDI-pollution nexus is even more complicated if we relate these theoretical considerations to the often-mentioned three economic characteristics: economic growth (scale effect), industrial composition (composition effect) and environmental regulation stringency (technique effect), defined in Grossman (1995) as the three economic determinants of emission from production activities. At first view, FDI entry is a decision partially depending on the environmental regulation stringency (technique effect) and the economic scale (scale effect) of the host country. At the same time, the structural linkage between FDI entry and final emission results is also built on their intermediation. Once foreign capital enters the host country, it can in turn exert influence on all three characteristics of the economy. For the case of China, firstly, FDI entry can accelerate economic growth, either through productivity reinforcement (Wei et al., 2001; Chen and Démurger, 2002; Liu and Wang, 2003), through technology diffusion (Thompson, 2002; Cheung and Lin, 2004; Lemoine and Ünal-Kesenci, 2004), or through the development of scale economies (Tuan and Ng, 2004). Secondly, although the theories that predict the pattern of trade do not focus on ownership, given the similarity between the FDI location decision and trade specialization, most of the factors used in traditional theories to predict a country's trade patterns can be used to explain the composition impact of FDI. On the one hand, the 'pollution haven' hypothesis suggests that China's relatively lax environmental regulation attracts the inflow of polluting foreign capital, which in turn increases the proportion of polluting sectors in the industrial composition.
On the other hand, given China's endowment of cheap labor, traditional comparative advantage theory expects that some polluting labor-intensive industries may also expand with the inflow of FDI. Copeland and Taylor (1994, 1997) and Antweiler et al. (2001) combined these two aspects and predicted that the final composition transformation induced by international trade depends on the relative strength of these two comparative advantages in the host economy. The same conclusion is also valid for the case of FDI. Thirdly, FDI entry can also facilitate the reinforcement of environmental regulation, either through its direct contribution to the accumulation of pollution abatement capacity in the host economy, or indirectly through its income-growth impact, which in turn reinforces public demand for a better environment. Finally, FDI-led variations in all three emission determinants can further lead the final emission result to vary and affect the future FDI entry decision. Given these considerations, Letchumanan and Kodama (2000) indicated that the relationship between FDI and environment cannot be adequately understood by simply analyzing measurements of FDI flows in relation to environmental conditions. We also need to consider the simultaneously occurring trends and underlying mechanisms working through the changes in economic scale, industrial composition and the technique effect.

3. Industrial SO2 emission and foreign direct investment in the Chinese provinces
The regional disparities in openness degree, economic growth and environmental situation between Chinese provinces became more and more remarkable during the last 25 years of economic reform. Figure 2 shows the detailed regional distribution of industrial SO2 emission, accumulated FDI capital stock, economic growth and the environmental regulation situation in the year 2001. Clearly, the rapid economic growth catalyzed by intensified FDI inflow does not benefit the 30 provinces in a homogeneous way. High ratios of FDI capital stock to GDP are remarkably concentrated in the richer eastern coastal provinces. While both FDI capital stock and per capita GDP show an obvious decreasing tendency as we move from the eastern coastal to the western inland provinces, SO2 emission does not follow the same geographical distribution pattern. Serious SO2 emission problems seem to appear more frequently in the central-northern provinces, which have a long tradition of heavy industrial production, and in some southern provinces such as Guizhou, where the coal endowment contains a high concentration of sulfur. Another reason explaining the serious SO2 pollution problem is the lax environmental regulation applied in some provinces, such as Heilongjiang, Shandong, Fujian and Qinghai, where we observe the co-existence of a low average SO2 levy rate and high per capita SO2 emission.

Figure 3 further studies the correlation between economic growth, FDI stock, environmental regulation stringency and the industrial SO2 emission situation by plotting them pairwise in the same diagram. Except for an inverted-U quadratic correlation between economic growth and FDI stock, for the other three pairs we cannot derive clear correlation directions, given the low significance of the estimated coefficients. Obviously, the relationship between FDI and emission is more complicated than a simple positive or negative correlation.

(Insert Figure 3 about here)

4. The links between FDI and emission: the system of simultaneous equations
Considering the shortcomings of the existing empirical studies on the FDI-environment linkage mentioned above, the basic idea of this paper is to study the relationship between FDI and final industrial SO2 emission in China within a structural framework, exploring both the relationship between environmental regulation stringency and the FDI entry decision and the linkage from FDI entry to the final emission result.

A direct inspiration for the system constructed in this paper comes from Dean (1998). In her paper, she studied the relationship between international trade and industrial wastewater emission in China with a simpler simultaneous system. Her model supposes that international trade increases pollution through a 'pollution haven' effect, but that trade also contributes to economic growth, which in turn reduces emission since higher income strengthens public demand for a better environment. Following the same reasoning, we suppose the relationship between FDI and industrial SO2 emission can be described by the following five-equation simultaneous model.

Equation (1) describes the economic determinants of emission. Following Grossman (1995), we include the scale effect, the composition effect and the technique effect in this equation. Other things kept unchanged, an economy with a larger production scale emits more pollution, so we expect a positive coefficient on the scale term. The composition effect reflects the pollution performance of an economy's industrial composition: for the same production scale, an industrial composition containing a higher percentage of polluting sectors emits more pollution. Therefore, we also anticipate a positive coefficient on the composition effect.
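The core estimation idea in the system above - do not regress emission on FDI by plain OLS, because FDI and emission are jointly determined, but instrument the endogenous regressors with predetermined variables - can be illustrated with a deliberately stripped-down two-equation example. The sketch below is not the paper's five-equation GMM system; it is a minimal two-stage least squares (2SLS) illustration on synthetic data, and the variable names (`emission`, `fdi`, `lagged_growth`, `regulation`) are placeholders of my own, not the paper's variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic data for a stylized two-equation system:
#   emission = a0 + a1*fdi + e1                        (fdi is endogenous)
#   fdi      = b0 + b1*lagged_growth + b2*regulation + e2
# lagged_growth and regulation play the role of predetermined variables.
lagged_growth = rng.normal(size=n)
regulation = rng.normal(size=n)
common = rng.normal(size=n)          # shock hitting both equations -> simultaneity
e1 = rng.normal(size=n) + common
e2 = rng.normal(size=n) + common
fdi = 1.0 + 0.8 * lagged_growth - 0.5 * regulation + e2
emission = 0.5 + 0.3 * fdi + e1      # true coefficient on fdi: 0.3


def ols(y, X):
    """Return OLS coefficients of y on X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]


X = np.column_stack([np.ones(n), fdi])
Z = np.column_stack([np.ones(n), lagged_growth, regulation])   # instruments

# Naive OLS is biased: fdi is correlated with e1 through the common shock.
beta_ols = ols(emission, X)

# 2SLS: first stage projects fdi on the instruments, second stage uses the fit.
fdi_hat = Z @ ols(fdi, Z)
beta_2sls = ols(emission, np.column_stack([np.ones(n), fdi_hat]))

print("OLS  estimate of the fdi coefficient:", round(beta_ols[1], 3))
print("2SLS estimate of the fdi coefficient:", round(beta_2sls[1], 3), "(true 0.3)")
```

The full system described in the paper adds province fixed effects, lagged variables and cross-equation GMM weighting, but the identifying logic is the same: predetermined variables excluded from the emission equation break the simultaneity.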
A Brief Introduction to Albert Einstein (in English)

Albert Einstein: Brief Introduction
Albert Einstein (March 14, 1879 - April 18, 1955) was a Jewish physicist. He was born in 1879 in Ulm, Germany, into a Jewish family (both parents were Jews). In 1900 he graduated from the Federal Institute of Technology in Zurich and later acquired Swiss nationality. In 1905 he received his doctorate from the University of Zurich. In that same year Einstein proposed the photon hypothesis and successfully explained the photoelectric effect, for which he won the 1921 Nobel Prize in Physics, and he created the special theory of relativity. In 1915 he founded the general theory of relativity.

Einstein laid the theoretical foundation for the development of nuclear energy and opened a new era of modern science and technology. He is recognized as the greatest physicist since Galileo and Newton. On December 26, 1999, Einstein was named "Person of the Century" by Time magazine.

Albert Einstein: Major Achievements

Relativity

The creation of special relativity:
As early as the age of 16, Einstein had learned from books that light is an electromagnetic wave travelling at high speed, and in this connection he wanted to explore the problem of the so-called ether. The name "ether" comes from Greece, where it denoted the basic element thought to make up the heavenly bodies. In the 17th century, Descartes and later Christiaan Huygens pioneered and developed the theory of the ether, holding that the ether is the medium in which light propagates, that it fills all space including the vacuum, and that it can penetrate matter. Unlike them, Newton proposed a corpuscular theory of light: a luminous body emits a stream of particles moving in straight lines, and the impact of this stream on the retina causes vision. In the 18th century Newton's corpuscular theory had the upper hand; in the 19th century the wave theory gained an absolute advantage, and the theory of the ether was greatly developed: waves need a medium, and the ether was taken to be the medium transmitting light in vacuum, hence also called the luminiferous ether. At the same time, electromagnetism flourished; through the efforts of Maxwell, Hertz and others, a mature dynamical theory of electromagnetic phenomena - electrodynamics - took shape, and it was shown theoretically and experimentally that light is an electromagnetic wave within a certain range of frequencies, thereby unifying the wave theory of light and electromagnetic theory. The ether was then regarded as the carrier not only of light but also of the electromagnetic field. Towards the end of the nineteenth century, however, every attempt to detect the ether failed; on the contrary, the Michelson-Morley experiment indicated that the ether is unlikely to exist.

The development of electromagnetism was originally incorporated into the framework of Newtonian mechanics, but in interpreting the electromagnetic processes of moving bodies it was found to be inconsistent with the relativity principle followed by Newtonian mechanics. According to Maxwell's theory, the velocity of electromagnetic waves in vacuum, that is, the speed of light, is constant; according to Newton's rule for adding velocities, however, the speed of light should differ between inertial frames in different states of motion. For example, consider two cars, one approaching you and one moving away: according to Galilean theory, the light emitted by the approaching car should reach you at a speed greater than c (the vacuum speed of light, 3.0 x 10^8 m/s), that is, speed of the front car's light = speed of light + speed of the car, while the light of the receding car should travel at less than c, that is, speed of the rear car's light = speed of light - speed of the car. But according to Maxwell the two speeds are the same, because the motion of the car does not affect the propagation of light: no matter how the cars move, the speed of light equals c. Maxwell and Galileo were thus clearly in contradiction about the speed of light.

Einstein seemed to be the one who would build the new edifice of physics. He studied Maxwell's electromagnetic theory carefully, especially as developed and elaborated through the electrodynamics of Hertz and Lorentz. Einstein firmly believed that the electromagnetic theory was completely correct, but one point made him uneasy: the existence of the ether as an absolute reference frame. He read many works and found that every experiment attempting to prove the existence of the ether had failed. After careful study, Einstein found that, apart from serving as an absolute reference frame and as the carrier of the electromagnetic field, the ether had no practical significance in Lorentz's theory.

Einstein loved reading philosophical works and drawing ideas from philosophy, and he believed in the unity and logical coherence of the world. David Hume's skepticism about the general validity of the law of causality, discussed in the Olympia Academy, also influenced him. The principle of relativity had been widely confirmed in mechanics but could not be established in electrodynamics, and Einstein questioned this logical inconsistency between the two theoretical systems of physics. In his view, the principle of relativity should hold generally, so the electromagnetic theory should take the same form in every inertial system; but then there was the problem of the speed of light. Whether the speed of light is constant or variable became the primary question for whether the principle of relativity could be universally established. At that time physicists generally believed in the ether, that is, in the existence of an absolute reference frame, under the influence of Newton's concept of absolute space. At the end of the nineteenth century, Mach, in The Science of Mechanics, criticized Newton's absolute view of time and space, and this left a deep impression on Einstein. One day in May 1905, Einstein discussed the problem he had been exploring for ten years with his friend Besso, who set out his own views in Machian terms, and the two talked for a long time. Suddenly Einstein realized something; after going home and thinking it over repeatedly, he finally understood the problem. The next day he came to Besso's home and said: thank you, my problem is solved. Einstein had become clear about one thing: there is no absolute definition of time, and time is inseparably linked with the speed of the light signal. With this key in hand, after five weeks of hard work, Einstein presented the special theory of relativity to the world.

On June 30, 1905, the German Annalen der Physik received Einstein's paper "On the Electrodynamics of Moving Bodies", which was published in its September issue of the same year. This was the first article on the special theory of relativity and contained its basic ideas and basic content.
The special theory of relativity is based on two principles: the principle of relativity and the principle of the constancy of the speed of light. Einstein's solution to the problem rested on his belief in the principle of relativity. Galileo had first stated the idea of relativity, but he did not give a clear definition of time and space. Newton also built relativity into his mechanics, but he defined absolute space, absolute time and absolute motion, and on this question he was contradictory. Einstein greatly developed the principle of relativity: in his view there is no absolutely static space, and likewise no absolutely identical time; all time and space are bound up with moving bodies. For any frame of reference and coordinate system, there are only the space and time belonging to that frame of reference and coordinate system.

For all inertial systems, the physical laws expressed in terms of the space and time of that reference system take the same form; this is the principle of relativity, strictly speaking the principle of special relativity. In this article Einstein did not treat the constancy of the speed of light as something to be derived; he proposed it as a bold hypothesis, drawn from electromagnetic theory and from the requirements of the principle of relativity. The article was the result of Einstein's years of thinking about the ether and electrodynamics. Starting from the relativity of simultaneity, he established a new theory of space and time, and on this basis gave the electrodynamics of moving bodies a complete form; the ether was no longer necessary, and the ether drift did not exist.

What is the relativity of simultaneity? How can we know that two events at different places happen at the same time? In general, we confirm this by a signal. In order to know the simultaneity of events at different places we have to know the speed of signal transmission; but how do we measure this speed? We would have to measure the spatial distance between the two places and the time needed for the signal to travel. Measuring the spatial distance is simple; the trouble lies in measuring the time: we must assume that each place has a well-synchronized clock, and from the readings of the two clocks we can know the transmission time of the signal. But how do we know that the clocks at the two places are synchronized? The answer is that another signal is needed. If this new signal is treated in the previous way, it requires yet another signal, and so on in infinite regress, so that simultaneity at a distance can never actually be confirmed. One thing, however, is clear: simultaneity must be bound up with a signal, otherwise saying that two events happen at the same time is meaningless.

The optical signal may be the most suitable signal for synchronizing clocks, but the speed of light is not infinite, and this produces a novel conclusion: two events that are simultaneous for a stationary observer are not simultaneous for a moving observer. Imagine a high-speed train running at a speed close to that of light. As the train passes the platform, observer A stands on the platform and sees two bolts of lightning strike, one at the front of the train and one at the rear, leaving marks on the ends of the train and on the corresponding points of the platform, with A standing equidistant from the two ends. Since the two light signals travel equal distances and arrive at the same time, A concludes that the two lightning strikes occurred simultaneously. For observer B in the middle of the train, however, the situation is different: because B moves along with the high-speed train, he runs towards the signal coming from the front and so receives it first, and only afterwards receives the signal from the rear. For B the two events are not simultaneous. In other words, simultaneity is not absolute; it depends on the observer's state of motion. This conclusion negates the framework of absolute time and absolute space on which Newtonian mechanics is based.

Relativity holds that the speed of light is the same in all inertial reference frames and is the maximum speed at which objects can move. Owing to relativistic effects, a moving object contracts in length and a moving clock runs slow (time dilates). Because the speeds encountered in daily life are very low compared with the speed of light, these relativistic effects are not noticeable.

Einstein established the theory of relativity on the basis of this radical change in the concepts of space and time, and pointed out that mass increases with speed, tending to infinity as the speed approaches that of light. He also gave the famous mass-energy relation E = mc^2, which later played a guiding role in the development of the atomic energy industry.

The establishment of general relativity:
In 1905 Einstein published the first paper on special relativity ("On the Electrodynamics of Moving Bodies"), and it did not immediately arouse a great response. But the German physicist Planck took note of it, considering Einstein's work comparable to that of Copernicus, and relativity quickly became a subject of research and discussion; Einstein also attracted the attention of academia.

In 1907, following a friend's suggestion, Einstein submitted the famous paper in support of an application for a lecturer post at the University of Bern, but the answer was that the paper could not be understood. Although Einstein was already very well known in German physics circles, in Switzerland he still held no university post, and many prominent people felt indignant on his behalf. In 1908 Einstein finally obtained the post of lecturer, and the following year he became an associate professor. In 1912 he became a full professor, and in 1913, at Planck's invitation, he was appointed director of the newly established Kaiser Wilhelm Institute for Physics and professor at the University of Berlin.

During this period, while considering how to generalize the already established special relativity, two questions troubled Einstein. The first was the problem of gravitation: special relativity gave correct physical laws for mechanics, thermodynamics and electrodynamics, but it could not account for gravitation. Newton's theory of gravitation is an action-at-a-distance theory, in which the gravitational force between two bodies is transmitted instantaneously, that is, at infinite speed; this contradicts the field viewpoint of relativity and the light-speed limit. The second was the problem of non-inertial systems: special relativity, like the earlier laws of physics, applied only to inertial systems. But in reality it is difficult to find a true inertial system.
Logically, the laws of nature should not be confined to inertial systems; non-inertial systems must also be considered. Special relativity also has difficulty with the so-called twin paradox. Of a pair of twin brothers, one stays on Earth while the other makes a cosmic voyage in a spacecraft travelling near the speed of light. According to the relativistic effect that moving clocks run slow, when the traveller comes back he is still young, while the brother who stayed behind has become very old, because decades have passed on Earth. Yet according to the principle of relativity, the spacecraft moves at high speed relative to the Earth and the Earth likewise moves at high speed relative to the spacecraft, so each brother should expect the other to be the younger. Within the special theory alone this question simply cannot be answered: special relativity deals only with uniform rectilinear motion, whereas the travelling brother must undergo accelerated motion in order to return, which special relativity cannot handle. While others were still busy absorbing special relativity, Einstein was already working towards the general theory.

In 1907, Einstein wrote a long article, "On the Relativity Principle and the Conclusions Drawn from It", in which he first mentioned the principle of equivalence; from then on his ideas on equivalence kept developing. Taking the natural law of the equality of inertial mass and gravitational mass as the basis of the equivalence principle, he proposed that a uniform gravitational field in an infinitesimally small volume can be replaced by a uniformly accelerating reference frame. Einstein also put forward the "closed box" argument: an observer inside a closed box cannot, by any means, determine whether he is at rest in a gravitational field or undergoing accelerated motion in a space free of gravitation. This is the most commonly used illustration of the principle of equivalence, and the equivalence of inertial and gravitational mass is a natural corollary of the principle.

In November 1915, Einstein presented four papers to the Prussian Academy of Sciences. In these papers he gave a new explanation of the advance of Mercury's perihelion and presented the correct gravitational field equations. At this point the basic problems of general relativity were solved, and the general theory of relativity was born. In 1916, Einstein completed the long paper "The Foundation of the General Theory of Relativity". In this article he first named the relativity theory that applies only to inertial systems the special theory of relativity, called the principle that the laws take the same form in all inertial systems the principle of special relativity, and then formulated the principle of general relativity: the laws of physics must hold in reference systems in any state of motion.

Einstein's general theory of relativity holds that, because of the presence of matter, space and time are curved, and the gravitational field is in fact curved spacetime. The bending of space by the Sun's gravity explains very well the 43 arc-seconds per century of the advance of Mercury's perihelion that had previously resisted explanation. The second major prediction of general relativity is the gravitational redshift, that is, the shift of spectral lines toward the red in a strong gravitational field; in the 1920s, astronomers confirmed this in astronomical observations.

The third major prediction of general relativity is that a gravitational field deflects light, and the strongest gravitational field near the Earth is that of the Sun. Einstein predicted that light from distant stars grazing the Sun would be deflected by about 1.7 seconds of arc. In 1919, encouraged by the British astronomer Eddington, the British sent two expeditions to observe the total solar eclipse at two different sites. After careful study, the final conclusion was that starlight passing near the Sun was indeed deflected by roughly the predicted amount. The Royal Society and the Royal Astronomical Society formally heard the observation report and confirmed that the conclusion of general relativity was correct. "This is the most significant achievement of the theory of gravitation since Newton's time," and Einstein's relativity is "one of the greatest achievements of human thought," declared J. J. Thomson, the famous physicist and president of the Royal Society. Einstein became a celebrity. The popular account he wrote in 1916, "Relativity: The Special and the General Theory", had been reprinted forty times by 1922 and was translated into a dozen languages, spreading widely.

Significance of relativity:
A long time has passed since the establishment of the special and general theories of relativity; they have withstood the tests of practice and of history and are widely recognized as true. Relativity has had a great influence on the development of modern physics and on the development of modern human thought. Relativity unified classical physics in its logical structure, making classical physics a consummate scientific system. On the basis of the principle of special relativity and the principle of the constancy of the speed of light, special relativity unified Newtonian mechanics and Maxwell's electrodynamics, showing that both obey the principle of special relativity and are covariant under Lorentz transformations, and that Newtonian mechanics is merely a good approximation to the true laws for bodies moving at low speeds. General relativity, on the basis of general covariance, established through the equivalence principle the relationship between local inertial frames and general reference frames, obtained the generally covariant form of all physical laws, and built a theory of gravitation of which Newton's theory of gravity is only a first approximation. This fundamentally resolved the earlier restriction of physics to inertial frames and gave the subject a logically reasonable arrangement. Relativity re-examined the basic physical concepts of time, space, matter and motion, giving a scientific and systematic view of space-time and of matter, and thereby made physics a logically consummate scientific system.
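The kinematic and dynamic statements summarized above correspond to a handful of standard formulas. The following is a conventional textbook summary (the symbols $v$, $c$, $t_0$, $L_0$ and $m$ are the usual ones, not notation taken from this article):

\[
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
t = \gamma\, t_{0}, \qquad
L = \frac{L_{0}}{\gamma}, \qquad
E = m c^{2},
\]

where $t_0$ and $L_0$ are the proper time interval and proper length measured in the body's rest frame; for everyday speeds $\gamma \approx 1$, which is why these effects go unnoticed in daily life.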
The mass-energy relation not only created the necessary conditions for the establishment and development of quantum theory, but also provided the basis for the development and application of nuclear physics.

For the new concepts Einstein introduced, most physicists of the time, including Lorentz, the founder of the relativistic transformation, found them unacceptable. Some even said that "at that time only two and a half people in the world understood the theory of relativity." The obstacles posed by old habits of thought meant that this new physical theory became familiar to the majority of physicists only a generation later. Even the Royal Swedish Academy of Sciences, when awarding Einstein the Nobel Prize in Physics in 1922 (the reserved prize for 1921), said only that it was "for his services to theoretical physics, and especially for his discovery of the law of the photoelectric effect"; the citation for Einstein's Nobel Prize did not even mention relativity. (Note: relativity never won a Nobel Prize; an important reason was the lack, at the time, of sufficient factual verification.)

Photoelectric effect
In 1905, Einstein proposed the photon hypothesis and successfully explained the photoelectric effect, for which he won the 1921 Nobel Prize in Physics.

When light shines on a metal, the electrical properties of the material change. This class of light-induced electrical phenomena is called the photoelectric effect. The photoelectric effect is divided into photoelectron emission, the photoconductive effect and the photovoltaic effect. The first occurs at the surface of the object and is also called the external photoelectric effect; the latter two occur inside the object and are called the internal photoelectric effect.

Hertz discovered the photoelectric effect in 1887, and Einstein was the first to explain it successfully (the effect whereby a metal surface emits electrons under the action of light; the emitted electrons are called photoelectrons). Electrons are emitted only when the wavelength of the light is shorter than a certain critical value, the limit (threshold) wavelength; the frequency of the corresponding light is called the limit (threshold) frequency. The critical value depends on the metal, and the energy of the emitted electrons depends on the wavelength of the light and has nothing to do with the light intensity - something the wave picture of light cannot explain. A further contradiction with the wave picture is the instantaneous nature of the photoelectric effect: according to wave theory, if the incident light is weak, the metal should need a longer irradiation time to accumulate enough energy for electrons to fly out of its surface. In fact, as long as the frequency of the light is higher than the metal's limit frequency, emission is almost instantaneous whether the light is strong or weak, taking no more than about 10^-9 seconds.
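The threshold and instantaneity observations described above are captured by Einstein's photoelectric equation; in its standard textbook form (the symbols $h$, $\nu$, $W$ and $E_k$ are the conventional ones, not taken from this article):

\[
E_{k} = h\nu - W ,
\]

where $h\nu$ is the energy of a single photon, $W$ is the work function of the metal, and $E_k$ is the maximum kinetic energy of the emitted electron; no electrons are emitted when $\nu < W/h$, the threshold frequency, no matter how intense the light.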
The correct explanation is that light must consist of strictly defined units of energy (light quanta, or photons) whose energy is tied to the wavelength.

In the photoelectric effect, the direction in which electrons are ejected is not completely random: most are emitted perpendicular to the metal surface, independently of the direction of the incident light. If light were merely an electromagnetic wave - a high-frequency oscillation of orthogonal electric and magnetic fields of small amplitude - it would be hard to account for this emission pattern.

Conservation of energy
E = mc^2. The law of conservation of mass states that the mass of matter is indestructible; the law of conservation of energy states that the energy of matter is conserved.

Although both of these great laws had been discovered, people regarded them as two unrelated laws, each expressing a different regularity of nature. Some even held that the conservation of mass is a law of chemistry while the conservation of energy is a law of physics, belonging to different domains of science.

Einstein held that the mass of a body is a measure of its inertia and its energy a measure of its motion; energy and mass are not isolated from each other but interrelated and inseparable. A change in the mass of a body entails a corresponding change in its energy, and a change in its energy entails a corresponding change in its mass.

In the special theory of relativity, Einstein put forward the famous mass-energy formula E = mc^2 (where E is energy, m is mass and c is the speed of light, approximately 3 x 10^8 m/s), which indicates that energy can be released at the cost of a reduction in mass.

Einstein's mass-energy relation correctly explains all kinds of nuclear reactions. Take helium-4 (He-4), whose nucleus consists of two protons and two neutrons. One might expect the mass of the helium-4 nucleus to equal the sum of the masses of two protons and two neutrons; in fact this arithmetic does not hold - the mass of the helium nucleus is less than the sum of the masses of two protons and two neutrons by 0.0302 u (atomic mass units). Why? Because when two deuterons (each containing one proton and one neutron) fuse into a helium-4 nucleus, a large amount of atomic energy is released. When 1 g of helium-4 is produced, about 2.7 x 10^12 joules of atomic energy are released, and it is for this reason that the mass of the helium-4 nucleus is reduced. (A rough numerical check of the quoted mass defect appears at the end of this section.)

This example vividly shows that when two deuterium nuclei fuse into one helium-4 nucleus, mass appears not to be conserved, in that the mass of the helium-4 nucleus is not equal to the sum of the two deuteron masses. With the mass-energy relation, however, the mass lost by the helium-4 nucleus corresponds exactly to the mass carried away by the atomic energy released in the reaction.

Einstein thus clarified, from a higher standpoint, the law of conservation of mass and the law of conservation of energy, pointing out the close relationship between the two and carrying human understanding of nature a step further.

Cosmological constant
When applying general relativity to cosmology, Einstein introduced the cosmological constant into his equations: in order to obtain a static model of the universe, he added to the gravitational field equations a term proportional to the metric tensor, denoted by the symbol Λ. The constant is very small and is negligible on the scale of a galaxy; only on cosmic scales can Λ be meaningful, hence the name cosmological constant, a fixed value acting as a kind of anti-gravity. Einstein believed that this repulsive effect balances gravity and keeps the universe finite and static. When Hubble showed Einstein the astronomical observations of the expanding universe, Einstein said: "This is the biggest mistake of my life."

The universe is expanding. Hubble's observations implied that Einstein's balancing anti-gravity was not needed for a static universe, and that the mutual gravitation between galaxies should make the expansion slow down more and more. Yet there is a repulsive influence between the galaxies that drives the expansion of the universe - what is now called dark energy. About 7 billion years ago it "defeated" dark matter and became the dominant component of the universe. The latest research indicates that dark matter and dark energy together account for about 96% of the universe by mass composition (counting only actual matter, not virtual matter). It appears that the universe will continue to expand at an accelerating rate, perhaps until it disintegrates (there are other views, and the question remains controversial). Although the cosmological constant exists, the repulsive effect it represents now far outweighs gravitational attraction on cosmological scales. "I finally understand why he (Einstein) liked this theory so much that he still studied the cosmological constant for many years; the cosmological constant is still one of the biggest questions of physics today," Linde remarked humorously.
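As the rough check promised above for the helium-4 example (using the quoted mass defect of 0.0302 u together with standard constants; the per-nucleus figure is my own arithmetic, not a value taken from the source text):

\[
\Delta E = \Delta m\, c^{2}
\approx \left(0.0302 \times 1.66 \times 10^{-27}\,\mathrm{kg}\right) \times \left(3.0 \times 10^{8}\,\mathrm{m/s}\right)^{2}
\approx 4.5 \times 10^{-12}\,\mathrm{J} \approx 28\,\mathrm{MeV}
\]

per helium-4 nucleus formed - the binding-energy scale that makes nuclear fusion such a concentrated source of energy.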
Simulated Life: English essay template
Title: Simulated Life: A Look into the Future
Introduction
In recent years, the concept of simulating human life has become increasingly popular. With the advancement of technology and artificial intelligence, the idea of creating a simulated world where individuals can experience life in a virtual environment has captured the imagination of many. In this essay, we will explore the potential benefits and drawbacks of simulating human life, as well as the ethical and moral implications of such a concept.

The Potential Benefits of Simulated Life
One of the most significant benefits of simulating human life is the opportunity to experience different realities and scenarios that may not be possible in the real world. For example, individuals could simulate living in different time periods, exploring different cultures, or even experiencing life on other planets. This could lead to a greater understanding and empathy for others, as well as a deeper appreciation for the diversity of human experience.

Additionally, simulating human life could provide a safe environment for individuals to learn and grow. For example, students could practice real-life scenarios in a simulated environment, such as medical procedures or emergency response situations. This could lead to more effective training and improved skill development in various fields.

The Drawbacks of Simulated Life
Simultaneous Equations Models (1)

The Nature of Simultaneous Equations Models
● In the previous study, we considered single-equation models: one Y and one or more X's.
● There, the cause-and-effect relationship runs from the X's to Y.
● There may, however, be a two-way, or simultaneous, relationship between Y and the X's.
● In that case it is difficult to distinguish dependent from explanatory variables.
● We therefore set up a simultaneous equations model in which the variables are jointly dependent, or endogenous.
● We cannot estimate the parameters of a single equation without taking into account the information provided by the other equations.
● The OLS estimator applied to a single equation of a simultaneous model is biased and inconsistent.
● For example, consider the two-equation system
  Y1i = β10 + β12 Y2i + γ11 X1i + u1i
  Y2i = β20 + β21 Y1i + γ21 X1i + u2i
  Y1i and u2i are correlated, and Y2i and u1i are correlated, so OLS leads to inconsistent estimates.

Examples of Simultaneous Equations Models
● Example 1: Demand-and-supply model
  Demand function: Qdt = α0 + α1 Pt + u1t,  α1 < 0
  Supply function: Qst = β0 + β1 Pt + u2t,  β1 > 0
  Equilibrium condition: Qdt = Qst
  where Qdt = quantity demanded, Qst = quantity supplied, t = time.
● Price P and quantity Q are determined by the intersection of the demand and supply curves; the demand and supply curves are assumed linear.
● P and Q are jointly dependent.
● The demand curve shifts upward if u1t is positive and downward if u1t is negative.
● A shift in the demand curve changes both P and Q.
● A change in u2t shifts the supply curve and again changes both P and Q.
● So u1t and P, and u2t and P, are correlated, which violates an important assumption of the CLRM.
● Example 2: Keynesian model of income determination
  Consumption function: Ct = β0 + β1 Yt + ut,  0 < β1 < 1
  Income identity: Yt = Ct + It (= Ct + St)
  where C = consumption expenditure, Y = income, I = investment (assumed exogenous), S = savings, t = time, u = stochastic disturbance term.
● β1 is the marginal propensity to consume (MPC), lying between 0 and 1.
● C and Y are interdependent, and Y is not expected to be independent of the disturbance term.
● When ut shifts, the consumption function also shifts, which in turn affects Y.

The simultaneous-equation bias: inconsistency of the OLS estimator
● Use the simple Keynesian model of income determination to show that the OLS estimator is inconsistent in a simultaneous model.
● We want to estimate the consumption function Ct = β0 + β1 Yt + ut.
● First, show that Yt and ut are correlated. Substituting the consumption function into the income identity gives
  Yt = β0/(1 − β1) + It/(1 − β1) + ut/(1 − β1)
  E(Yt) = β0/(1 − β1) + It/(1 − β1)
  so cov(Yt, ut) = E[(Yt − E(Yt))(ut − E(ut))] = E(ut²)/(1 − β1) = σ²/(1 − β1).
● Second, show that the OLS estimator β̂1 is an inconsistent estimator of β1 because of the correlation between Yt and ut:
  β̂1 = Σ(Ct − C̄)(Yt − Ȳ) / Σ(Yt − Ȳ)² = Σ Ct yt / Σ yt² = β1 + Σ yt ut / Σ yt²,  where yt = Yt − Ȳ.
  Hence
  plim(β̂1) = β1 + plim(Σ yt ut / N) / plim(Σ yt² / N) = β1 + [σ²/(1 − β1)] / σY².
● plim(β̂1) will therefore always be greater than β1.

The Identification Problem
● Recall the demand-and-supply model. If we have time-series data on P and Q only, and no additional information, can we estimate the demand function?
● We first need to solve the identification problem.

Notations and Definitions
● Take the income determination model as an example:
  Consumption function: Ct = β0 + β1 Yt + ut,  0 < β1 < 1
  Income identity: Yt = Ct + It (= Ct + St)
● - Endogenous variables: determined within the model.
  - Predetermined variables: determined outside the model.
  - Predetermined variables include current and lagged exogenous variables and lagged endogenous variables.
  - A lagged endogenous variable is nonstochastic, hence a predetermined variable.
  - Be careful to justify the classification.
● The β's are known as the structural parameters or coefficients.
● Solve for the endogenous variables to derive the reduced-form equations.
● A reduced-form equation expresses an endogenous variable solely in terms of the predetermined variables and the stochastic disturbances.
● Substituting the consumption function into the income identity gives
  Yt = Π0 + Π1 It + wt,  where Π0 = β0/(1 − β1), Π1 = 1/(1 − β1), wt = ut/(1 − β1).
● Substituting the income identity into the consumption function gives
  Ct = Π2 + Π3 It + wt,  where Π2 = β0/(1 − β1), Π3 = β1/(1 − β1), wt = ut/(1 − β1).
● Π1 and Π3 are impact multipliers.
● The reduced-form equations give the equilibrium values of the relevant endogenous variables.
● The OLS method can be applied to estimate the coefficients of the reduced-form equations.
● The structural coefficients can be "retrieved" from the reduced-form coefficients.

The identification problem
● The identification problem is whether numerical estimates of the parameters of a structural equation can be obtained from the estimated reduced-form coefficients.
● An equation may be identified or underidentified; if identified, it is either exactly identified or overidentified.

Underidentified
● Consider the demand-and-supply model, together with the market-clearing condition. (Insert equations)
● There are four structural coefficients but only two reduced-form coefficients, so the system cannot be solved.
● What does "underidentified" mean? See the figures.
● An alternative way of looking at the identification problem is through "mongrel" equations: if a mongrel equation is observationally indistinguishable from the demand function, the demand function is underidentified.

Just, or exact, identification
● Demand function: Qt = α0 + α1 Pt + α2 It + u1t,  α1 < 0, α2 > 0
  Supply function: Qt = β0 + β1 Pt + u2t,  β1 > 0
● There is an additional variable (income It) in the demand equation.
● Derive the reduced-form equations.
● Five structural coefficients correspond to only four reduced-form coefficients, so the system as a whole remains underidentified.
● The demand curve is underidentified, but the supply curve is identified.
● The "mongrel" equation is distinguishable from the supply function but not from the demand function.
● The presence of an additional variable in the demand function enables us to identify the supply function.
● Now consider
  Demand function: Qt = α0 + α1 Pt + α2 It + u1t,  α1 < 0, α2 > 0
  Supply function: Qt = β0 + β1 Pt + β2 Pt−1 + u2t,  β1 > 0, β2 > 0
  Both equations are exactly identified.

Overidentified
● Demand function: Qt = α0 + α1 Pt + α2 It + α3 Rt + u1t
  Supply function: Qt = β0 + β1 Pt + β2 Pt−1 + u2t
● Solving the structural equations gives the reduced form:
  Pt = Π0 + Π1 It + Π2 Rt + Π3 Pt−1 + vt
  Qt = Π4 + Π5 It + Π6 Rt + Π7 Pt−1 + wt
● There are 8 reduced-form coefficients from which to determine 7 structural coefficients, so there are multiple solutions; for example, β1 = Π5/Π1 but also β1 = Π6/Π2, and the two will generally differ.
● The reason for the multiple solutions is that we have "too much" information with which to identify the supply curve.
● The "too much" is reflected in the exclusion of two predetermined variables (It and Rt) from the supply function.
One excluded variable would have been enough.

Rules for identification
● We could solve the structural equations, derive the reduced form, and count how many structural coefficients correspond to how many reduced-form coefficients, but there is no need for this time-consuming process.
● Order condition of identification:
  M – number of endogenous variables in the model
  m – number of endogenous variables in a given equation
  K – number of predetermined variables in the model
  k – number of predetermined variables in a given equation
  In a model of M simultaneous equations, for an equation to be identified, the number of predetermined variables excluded from that equation must not be less than the number of endogenous variables included in it less one, that is, K − k ≥ m − 1. If K − k = m − 1 the equation is exactly identified; if K − k > m − 1 it is overidentified.
● Check the previous examples.
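To make the inconsistency and indirect-least-squares results concrete, here is a small simulation sketch (not part of the original notes; all numerical values are illustrative) of the Keynesian model Ct = β0 + β1 Yt + ut, Yt = Ct + It. OLS on the consumption function overestimates β1, while the estimate recovered from the reduced form Yt = Π0 + Π1 It + wt, using β1 = 1 − 1/Π1, does not.

```python
import numpy as np

rng = np.random.default_rng(0)

beta0, beta1 = 20.0, 0.6          # true structural parameters (illustrative)
n = 100_000

I = rng.uniform(10, 50, size=n)   # exogenous investment
u = rng.normal(0, 5, size=n)      # structural disturbance

# Reduced form implied by C = b0 + b1*Y + u and Y = C + I:
Y = (beta0 + I + u) / (1 - beta1)
C = Y - I

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return (x_c @ y_c) / (x_c @ x_c)

# Direct OLS on the structural consumption function: biased upward,
# because Y is correlated with u.
b1_ols = slope(Y, C)

# Indirect least squares: estimate the reduced form Y = Pi0 + Pi1*I + w,
# where Pi1 = 1/(1 - beta1), then invert.
pi1 = slope(I, Y)
b1_ils = 1 - 1 / pi1

print(f"true beta1   : {beta1:.3f}")
print(f"OLS estimate : {b1_ols:.3f}   (inconsistent, > beta1)")
print(f"ILS estimate : {b1_ils:.3f}   (consistent)")
```

With these settings the OLS estimate settles near beta1 + [sigma^2/(1 − beta1)]/var(Y), visibly above 0.6, while the indirect least squares estimate stays close to 0.6.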
Axioma Portfolio Optimizer
The most flexible portfolio-construction tool on the market

Axioma Portfolio Optimizer supports a wide range of investment management approaches, from quantitative to fundamental. With virtually limitless objectives and an equally unlimited range of constraints, Axioma Portfolio Optimizer delivers maximum flexibility to model even the most complex strategies.
Plus, Axioma Portfolio Optimizer incorporates a key unique feature: our patented Axioma Alpha Alignment methodology prevents the risk underestimation common to optimizers, while effectively managing uncertainty with Axioma's Robust Optimization technology. And with its open platform, clients have the freedom to choose any combination of Axioma Robust Risk Models and/or third-party and internal risk models for their portfolio construction process.

Flexible is better…

For index-tracking portfolios…
• Take advantage of flexible options for risk control.
Minimize the tracking error of your portfolio in the objective or place a hard limit on tracking error using a risk constraint.
Use more than one risk model to incorporate several perspectives on risk.
Include risk elements that use more than one benchmark or model portfolio.
• Incorporate all the transaction-related costs that impact performance. The transaction cost types available in the optimizer can be used in any combination to accurately reflect overall costs.
Capture market impact using non-linear market-impact models (quadratic, 3/2, and 5/3 powers) or a piecewise-linear approximation.
Use the fully integrated Goldman Sachs Shortfall Model.
Include commissions and brokerage fees using linear costs.
• Place explicit limits on trading activity.
Limit overall portfolio turnover or limit turnover of a set of assets.
Place asset-specific limits on trading; for example, limit trade size to a fraction of average daily volume.
• Control portfolio beta using Axioma-provided historical betas, predicted betas computed relative to any benchmark you choose, or betas you supply.
• Limit the number of names held or traded.

For actively managed portfolios…
In addition to the risk and transaction cost controls available for passive portfolio management, the modeling library contains many options designed to get the most from your alpha signal.
• Augment your risk control using the Alpha Factor. The Alpha Factor compensates for components of your alpha signal that are not included in the risk model, providing more accurate predicted risk estimates.
• Limit risk contributions at the factor or asset level.
• Use Robust Optimization to explicitly incorporate uncertainty in your alpha estimates.
• Incorporate additional tilts in your objective.
Tilts can be implemented on any factor, including risk factors, factors from one of Axioma's factor libraries, or user-supplied factors.
• Prevent small positions and small trades with threshold constraints.

For long/short portfolios…
Manage your long/short portfolio, create a hedging basket, or create an overlay using Axioma's extensive set of options for long/short optimization.
• Control the long, short, and net exposure to any factor, including user-defined factors.
• Fix the portfolio leverage or let the optimizer select the best leverage value within a range you specify.
• Limit the ratio of long to short holdings in the total portfolio or over any subset of assets.
• Capture trading costs unique to long/short portfolios.
Include asset-specific borrowing costs to capture differences in the short rebate resulting from holding a short position.
Include asset-specific short sell costs to capture costs associated with taking a short position.
• Limit short selling at the asset, group, or portfolio level.
• Prevent short positions in "hard to borrow" assets.
• Limit the number of assets held on the long and short side separately.

For global portfolios…
Global portfolios present many challenges for optimizers. Asset universes are larger, there are often multiple assets in the universe from a given issuer, global risk models contain many more factors, and risk and exposure need to be managed over more dimensions.
• Incorporate risk from the currency perspective of your choice.
• Control exposures to countries and currencies, in addition to sectors, industries, and styles.
• Accurately incorporate active specific risk for portfolios that hold multiple lines from the same issuer using Axioma's Issuer Specific Covariance.
• Track a benchmark holding foreign ordinaries by holding depository receipts.
• Control holdings by issuer.
Include "issuer bet" constraints that aggregate asset holdings by issuer.
• Control trading by issuer:
Limit the total buys and sells by issuer.
Prevent simultaneous buying and selling of assets from the same issuer.
• Capture ticket charges for buys, sells or all transactions. Ticket charges can vary by country, exchange or on an asset-by-asset basis.
• Set round lot values by country, exchange or individual asset.

The Superior Features of Axioma Portfolio Deliver Superior Performance

Modeling flexibility provided by powerful optimization algorithms…
Axioma Portfolio uses Second Order Cone Programming (SOCP), a state-of-the-art approach capable of solving complex optimization problems exactly and efficiently. With Axioma Portfolio, you can move beyond simple mean-variance optimization scenarios to construct models that accurately capture all the complexities of your investment process.

An open platform…
The Axioma Portfolio platform is completely open. Use content from Axioma, a third party or your own research team to power your construction process.

Incorporate ETFs, futures and other composite instruments…
• Utilize integrated content from Axioma or create your own custom asset compositions.
• Take advantage of full look-through to accurately control portfolio risks.
• Control total exposure at the asset or factor level resulting from holding a combination of individual stocks and ETFs or futures.

Diagnose infeasibilities quickly and easily with the Constraint Hierarchy…
Axioma Portfolio's Constraint Hierarchy provides a comprehensive approach to dealing with infeasibilities.
Simply provide a priority for each constraint and, when a conflict between constraints occurs, the Optimizer will provide a solution that is as close as possible to satisfying all the constraints. If a constraint must be violated, the constraint with the lowest possible priority is always chosen.

Use Constraint Attribution to understand the costs of your constraints…
Axioma's Constraint Attribution provides a dashboard view of the impact of individual constraints, objectives or groups of constraints on the objective function value (usually portfolio return) in your strategy. This tells you which elements of your strategy have an overall impact and which do not. It also provides a relative measure of the potential for improving the objective by relaxing different constraints.

Explore strategy trade-offs with the Frontier…
Frontier optimization explores the impact of varying either a constraint limit or an objective term weight. Used in conjunction with Constraint Attribution, it enables the user to validate the impact of relaxing constraints in the strategy. Objective frontiers can be used to create a classical mean-variance frontier or to explore trade-offs, such as return vs. transaction costs or risk vs. tax liability. The Frontier automates the execution of a series of rebalancings covering a range of strategy settings. It provides a consolidated view of the results for each rebalancing, enabling the user to select the optimum trade-off.

Evaluate small changes to the optimized portfolio with the Buy/Sell Editor…
The Buy/Sell Editor enables the user to evaluate the impact of changes to the portfolio holdings or optimized trade list. These changes can be created manually or by an automated process external to the Optimizer. The Buy/Sell Editor provides an easy-to-use interface for specifying these changes and evaluating their impact on the characteristics of the resulting portfolio. All portfolio analytics are accessible for portfolios created using the Buy/Sell Editor.

Flexible backtesting…
Add the Axioma Backtester module and utilize all the model-building flexibility in the Optimizer for time-series simulations. Like the Optimizer, the Backtester is an open platform enabling you to use content from any source you choose in your backtest analysis.

Automate your portfolio construction processes…
The execution of large batches of portfolio rebalancings can be automated and scheduled to take place at your convenience. Portfolios can share a common strategy, or each portfolio can have a unique set of optimization settings.
Further customization and automation can be accomplished using one of Axioma Portfolio's API interfaces. In addition to the GUI, there are APIs in C++, Java, MATLAB, and R. There is also an XML-based interface that enables easy integration with existing systems via a command-line interface.

Minimum system requirements…
Axioma Portfolio runs on Windows 200x, XP, or Windows 7. A Pentium 4 or better processor is recommended. For best performance, 4GB or more of RAM is recommended.

For more information about Axioma, ***************************** or call us.
United States and Canada: +1 212-991-4500
Europe: +44 (0)20 7856 2424
Asia: +852-8203-2790

Axioma, Inc. is the leading provider of innovative risk-management and portfolio-construction solutions to financial institutions worldwide. With an emphasis on innovation, outstanding service and open-platform technology, Axioma gives its clients more choice, greater flexibility and increased performance.
Founded in 1998, Axioma is headquartered in New York, with offices in Atlanta, Geneva, Hong Kong, London, San Francisco and Singapore.
© Copyright, Axioma, Inc. 2014 - All rights reserved
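The brochure above describes the class of problems such an optimizer solves. As a generic, library-agnostic illustration (not Axioma's software or API; the risk model, weights, signal, and limits below are all invented for the example), the following sketch sets up a small index-tracking-style problem with the open-source cvxpy package: maximize an alpha signal minus a tracking-risk penalty, subject to budget, long-only, and turnover constraints.

```python
# A generic index-tracking sketch (illustrative only; not Axioma's API).
# Requires: pip install numpy cvxpy
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, k = 20, 3                         # 20 assets, 3 risk factors (toy sizes)

# Toy factor risk model: Sigma = B F B' + diag(specific variance)
B = rng.normal(size=(n, k))
F = np.diag([0.04, 0.02, 0.01])
spec = rng.uniform(0.01, 0.05, size=n)
Sigma = B @ F @ B.T + np.diag(spec)
Sigma = 0.5 * (Sigma + Sigma.T)      # enforce exact symmetry for quad_form

w_bench = np.full(n, 1.0 / n)        # benchmark weights
w_init = rng.dirichlet(np.ones(n))   # current holdings
alpha = rng.normal(0, 0.02, size=n)  # toy expected-return signal

w = cp.Variable(n)
active = w - w_bench                 # active weights vs. benchmark

# Objective: alpha minus a penalty on active (tracking) variance.
objective = cp.Maximize(alpha @ w - 5.0 * cp.quad_form(active, Sigma))
constraints = [
    cp.sum(w) == 1,                  # fully invested
    w >= 0,                          # long-only
    cp.norm1(w - w_init) <= 0.60,    # two-sided turnover limit
    # A hard tracking-variance cap could be added similarly:
    # cp.quad_form(active, Sigma) <= te_max**2
]

prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.status)
print(np.round(w.value, 3))
```

A production tool layers many more constraint types (names held, factor exposures, market impact, round lots) on the same basic structure; this sketch only shows the shape of the problem.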
Copyright © 2011 by Beiting Cheng, Ioannis Ioannou, and George SerafeimWorking papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of workingCorporate Social Responsibilityand Access to FinanceBeiting Cheng Ioannis Ioannou George SerafeimWorking Paper11-130CORPORATE SOCIAL RESPONSIBILITY AND ACCESS TO FINANCEBeiting ChengHarvard Business SchoolIoannis IoannouLondon Business SchoolGeorge SerafeimHarvard Business SchoolMay 18th, 2011AbstractIn this paper, we investigate whether superior performance on corporate social responsibility (CSR) strategies leads to better access to finance. We hypothesize that better access to finance can be attributed to reduced agency costs, due to enhanced stakeholder engagement through CSR and reduced informational asymmetries, due to increased transparency through non-financial reporting. Using a large cross-section of firms, we show that firms with better CSR performance face significantly lower capital constraints. The results are confirmed using an instrumental variables and a simultaneous equations approach. Finally, we find that the relation is primarily driven by social and environmental performance, rather than corporate governance. Keywords: corporate social responsibility, sustainability, capital constraints, ESG (environmental, social, governance) performanceI.INTRODUCTIONIn recent decades, a growing number of academics as well as top executives have been allocating a considerable amount of time and resources to Corporate Social Responsibility1 (CSR) strategies. According to the latest UN Global Compact – Accenture CEO study2 (2010), 93 percent of the 766 participant CEOs from all over the world declared CSR as an “important” or “very important” factor for their organizations’ future success. On the demand side, consumers are becoming increasingly aware of firms’ CSR performance: a recent 5,000-people survey3 by Edelman4revealed that nearly two thirds of those interviewed cited “transparent and honest business practices” as the most important driver of a firm’s reputation. Although CSR has received such a great amount of attention, a fundamental question still remaining unanswered is whether CSR leads to value creation, and if so, in what ways? The extant research so far has failed to give a definitive answer (Margolis and Walsh, 2003). In this paper, we examine one specific mechanism through which CSR may lead to better long-run performance: by lowering the constraints that the firm is facing when accessing funds to finance operations and undertake strategic projects.To date, many studies have investigated the link between CSR and financial performance, and have found rather conflicting results5. According to McWilliams and Siegel (2000), conflicting results were due to the studies’ “several important theoretical and empirical 1 Here, we follow a long list of studies (e.g. Carroll, 1979; Wolfe and Aupperle, 1991; Waddock and Graves, 1997; Hillman and Keim, 2001; Waldman et al., 2006) in defining corporate social responsibility as: “a business organization’s configuration of principles of social responsibility, processes of social responsiveness, and policies, programs, and observable outcomes as they relate to the firm’s social relationships” (Wood, 1991: p.693).2 “A New Era of Sustainability. 
UN Global Compact-Accenture CEO Study 2010” last accessed July 28th, 2010 at: (https:///sustainability/research_and_insights/Pages/A-New-Era-of-Sustainability.aspx)3Mckinght, L., 2011. “Companies that do good also do well”, Market Watch, The Wall Street Journal (Digital Network), last accessed April 11th, 2011 at: /story/companies-that-do-good-also-do-well-2011-03-234 Edelman is a leading independent global PR firm that has been providing public relations counsel and strategic communications services for more than 50 years. /5 Margolis and Walsh (2003) and Orlitzky, Schidt and Rynes (2003) provide comprehensive reviews of the extant literature.limitations” (p.603). Others have argued that the studies suffered from “stakeholder mismatching” (Wood and Jones, 1995), the neglect of “contingency factors” (e.g. Ullmann, 1985), “measurement errors” (e.g. Waddock and Graves, 1997) and, omitted variable bias (Aupperle, Carrol and Hatfield, 1985; Cochran and Wood, 1984; Ullman, 1985).In this paper, we focus on the impact of CSR on the firm’s capital constraints. By “capital constraints” we refer to market frictions6that may prevent a firm from funding all desired investments. This inability to obtain finance may be “due to credit constraints or inability to borrow, inability to issue equity, dependence on bank loans, or illiquidity of assets” (Lamont et al., 2001). Prior studies have suggested that capital constraints play an important role in strategic decision-making, since they directly affect the firm’s ability to undertake major investment decisions and, also influence the firm’s capital structure choices (e.g., Hennessy and Whited, 2007). Moreover, past research has found that capital constraints are associated with a firm’s subsequent stock returns (e.g. Lamont et al., 2001).There are several reasons why investors would pay attention to a firm’s CSR strategies. First, firm activities that may affect7 long-term financial performance are taken into account by market participants when assessing a firm’s long-run value-creating potential (Ioannou and Serafeim, 2010a; Groysberg et al., 2011; Previts and Bricker, 1994). Moreover, a growing number of investors use CSR information as an important criterion for their investment decisions – what is currently known as “socially responsible investing” (SRI). For example, in 2007 mutual funds that invested in socially responsible firms had assets under management of more than $2.5 and $2 trillion dollars in the United States and Europe respectively. In Canada, Japan 6Consistent with prior literature in corporate finance (e.g. Lamont et al., 2001), we do not use the term to mean financial distress, economic distress or bankruptcy risk.7 Margolis and Walsh (2003) perform a meta-analysis of the CSR studies and find that the link between CSR and financial performance is small, albeit it is positive and statistically significant.and Australia, the corresponding numbers were $500, $100 and $64 billion respectively (Ioannou and Serafeim, 2010a). Total assets under management by socially responsible investors have grown considerably in the last ten years in countries such as the United States, United Kingdom, and Canada. In addition, the emergence of several CSR rankings and ratings firms (such as Thomson Reuters ASSET4 and KLD), the widespread dissemination of data on ESG performance by Bloomberg terminals, as well as the formation of teams to analyze CSR data within large banks such as J.P. 
Morgan Chase and Deutsche Bank,8highlight the growing demand and subsequent increasing use of CSR information. Furthermore, projects like the Enhanced Analyst Initiative9(EAI) that allocate a minimum of 5 percent of trading commissions to brokers that integrate analysis of CSR data into their mainstream research has further increased investor incentives to incorporate CSR data in their analysis10. Finally, in several countries around the world, governments have adopted laws and regulations that mandate CSR reporting (Ioannou and Serafeim, 2011) as part of efforts to increase the availability of CSR data and bring transparency around nonfinancial performance.The thesis of this paper is that firms with better CSR performance face fewer capital constraints. This is due to several reasons. First, superior CSR performance is directly linked to better stakeholder engagement, which in turn implies that contracting with stakeholders takes place on the basis of mutual trust and cooperation (Jones, 1995). Furthermore, as Jones (1995) argues, “because ethical solutions to commitment problems are more efficient than mechanisms designed to curb opportunism, it follows that firms that contract with their stakeholders on the basis of mutual trust and cooperation […] will experience reduced agency costs, transaction costs8 Cobley, M. 2009. “Banks Cut Back Analysis on Social Responsibility”, The Wall Street Journal, June 11th 2009.9 An initiative established by institutional investors with assets totaling more than US$1 trillion.10 “Universal Ownership: Exploring Opportunities and Challenges”, Conference Report, April 2006, Saint Mary’s College of California, Center for the study of Fiduciary Capitalism and Mercer Investment Consulting.and costs associated with team production” (Foo, 2007). In other words, superior stakeholder engagement may directly limit the likelihood of short-term opportunistic behavior (Benabou and Tirole, 2010; Ioannou and Serafeim, 2011) by reducing overall contracting costs (Jones, 2005).Moreover, firms with better CSR performance are more likely to disclose their CSR activities to the market (Dhaliwal et al., 2011) to signal their long-term focus and differentiate themselves (Spence, 1973; Benabou and Tirole, 2010). In turn, reporting of CSR activities: a) increases transparency around the social and environmental impact of companies, and their governance structure and b) may change internal management practices by creating incentives for companies to better manage their relationships with key stakeholders such as employees, investors, customers, suppliers, regulators, and civil society (Ioannou and Serafeim, 2011). Therefore, the increased availability of data about the firm reduces informational asymmetries between the firm and investors (e.g. Botosan, 1997; Khurana and Raman, 2004; Hail and Leuz, 2006; Chen et al., 2009; El Ghoul et al., 2010), leading to lower capital constraints (Hubbard, 1998).In fact, the rapid growth of available capital through SRI funds in recent years (Ioannou and Serafeim, 2010a), and the corresponding expansion of potential financiers that base their investment decisions on non-financial information (Kapstein, 2001), may well be partially due to the increased transparency and an endorsement of the long-term orientation that firms with superior CSR performance adopt. 
In sum, because of lower agency costs through stakeholder engagement and increased transparency through nonfinancial reporting, we predict that a firm with superior CSR performance will face lower capital constraints.To investigate the impact that CSR has on capital constraints, we use data from Thompson Reuters ASSET411for 2,439 publicly listed firms during the period 2002 to 2009. Thompson Reuters ASSET4 rates firms’ performance on three dimensions (“pillars”) of CSR: social, environmental and corporate governance. The dependent variable of interest is the “KZ index”, first advocated by Kaplan and Zingales (1997) and subsequently used extensively by scholars (e.g. Lamont et al., 2001; Baker et al., 2003; Almeida et al., 2004; Bakke and Whited, 2010; Hong et al., 2011) as a measure of capital constraints.The results confirm that firms with better CSR performance face lower capital constraints. We test the robustness of the results, by substituting the KZ index with an indicator variable for stock repurchase activity, to proxy for capital constraints, and we find similar results. Importantly, the results remain unchanged when we implement an instrumental variables approach and a simultaneous equations model, mitigating potential endogeneity concerns or correlated omitted variables issues, and providing evidence for a causal argument. Finally, we disaggregate CSR performance into its three components to gain insight as to which pillars have the greatest impact on capital constraints. We find that the result is driven primarily by social and environmental performance.This paper contributes to both the theoretical and empirical literature on CSR. Although many studies have explored the link between CSR and value creation, few have focused on the crucial role that capital markets play as a mechanism through which CSR may translate into tangible benefits for firms (e.g. Derwall and Verwijmeren, 2007; Goss and Roberts, 2011; Sharfman and Fernando, 2008; Chava, 2010). We contribute to this literature by showing the impact that CSR has on the firm’s ability to access finance in capital markets.11ASSET 4 is widely used by investors as a source for environmental, social and governance performance data. Some of the most prominent investment houses in the world, such as BlackRock, use the ASSET 4 data. See: /content/financial/pdf/491304/2011_04_blackrock_appoints_esg.pdfFurthermore, this study sheds light on the core strategic problem: understanding persistent performance heterogeneity across firms in the long-run. We argue that differential ability across firms to implement CSR strategies, results in significant variation in terms of CSR performance which in turn, is directly linked to the firm’s ability to access capital. Differential access to capital implies variation in the ability of firms to finance major strategic investments, leading to direct performance implications in the long-run. In other words, by understanding the consequences of variability in CSR strategies we contribute towards understanding performance heterogeneity across firms in the long-run.The remainder of the paper is organized as follows. Section II discusses the prior literature linking CSR to value creation, and prior literature linking capital constraints with firm performance. Section III presents the theoretical argument and derives our main hypothesis. Section IV presents the data sources and the empirical methods. 
Section V presents the results and section VI provides a discussion of the findings, the limitations of the study and concludes.II.PRIOR LITERATURECorporate Social Responsibility and Firm PerformanceMany studies have investigated the link between CSR and financial performance, both from a theoretical as well as from an empirical standpoint. On the one hand, prior theoretical work rooted in neoclassical economics argued that CSR unnecessarily raises a firm’s costs, and thus, puts the firm in a position of competitive disadvantage vis-à-vis competitors (Friedman, 1970; Aupperle et al., 1985; McWilliams and Siegel, 1997; Jensen, 2002). Other studies have argued that employing valuable firm resources to engage in socially responsible strategies results insignificant managerial benefits rather than financial benefits to the firm’s shareholders (Brammer and Millington, 2008).On the other hand, several scholars have argued that CSR may have a positive impact on firms by providing better access to valuable resources (Cochran and Wood, 1984; Waddock and Graves, 1997), attracting and retaining higher quality employees (Turban and Greening, 1996; Greening and Turban, 2000), better marketing for products and services (Moskowitz, 1972; Fombrun, 1996), creating unforeseen opportunities (Fombrun et al., 2000), and gaining social legitimacy (Hawn et al., 2011). Furthermore, others have argued that CSR may function in similar ways as advertising does and therefore, increase overall demand for products and services and/or reduce consumer price sensitivity (Dorfman and Steiner, 1954; Navarro, 1988; Sen and Bhattacharya, 2001; Milgrom and Roberts, 1986) as well as enable the firm to develop intangible resources (Gardberg and Fomburn, 2006; Hull and Rothernberg, 2008; Waddock and Graves, 1997). Within stakeholder theory (Freeman, 1984; Freeman et al., 2007; Freeman et al., 2010), which suggests that CSR is synonymous to effective management of multiple stakeholder relationships, scholars have argued that identifying and managing ties with key stakeholders can mitigate the likelihood of negative regulatory, legislative or fiscal action (Freeman, 1984; Berman et al., 1999; Hillman and Keim, 2001), attract socially conscious consumers (Hillman and Keim, 2001), or attract financial resources from socially responsible investors (Kapstein, 2001). CSR may also lead to value creation by protecting and enhancing corporate reputation (Fombrun and Shanley, 1990; Fombrun, 2005; Freeman et al., 2007).Empirical examinations of the link between CSR and corporate financial performance have resulted in contradictory findings, ranging from a positive to a negative relation, to a U-shaped or even to an inverse-U shaped relation (Margolis and Walsh, 2003). According toMcWilliams and Siegel (2000), conflicting results were due to “several important theoretical and empirical limitations” (p.603) of prior studies; others have argued that prior work suffered from “stakeholder mismatching” (Wood and Jones, 1995), the neglect of “contingency factors” (e.g. Ullmann, 1985), “measurement errors” (e.g. Waddock and Graves, 1997) and, omitted variable bias (Aupperle et al.,, 1985; Cochran and Wood, 1984; Ullman, 1985).In this paper, we shed light on the link between CSR and value creation, by focusing on the role of capital markets, as a specific mechanism through which CSR strategies may translate into economic value in the long run. 
More specifically, we argue that better CSR performance leads to lower capital constraints, which in turn has a positive impact on performance. Accordingly, the following subsection briefly reviews prior literature on the link between capital constraints and firm performance.Capital Constraints and Firm PerformanceFirms undertake strategic investments to achieve competitive advantage and thus, superior performance. The ability of the firms to undertake such investments is, in turn, directly linked to the idiosyncratic capital constraints that the firm is facing. Therefore, to understand the link between capital constraints and performance we first focus on the impact of capital constraints on investments. The theory of investment was shaped by Modigliani and Miller's seminal paper in 1958, which predicted that “a firm's financial status is irrelevant for real investment decisions in a world of perfect and complete capital markets.” The neoclassical economists derived the investment function from the firm's profit-maximizing behavior and showed that investment depends on the marginal productivity of capital, interest rate, and tax rules (Summers et. al. 1981; Mankiw 2009). However, subsequent studies in equity and debtmarkets showed that cash flow (i.e. internal funds) also plays a significant role in determining the level of investment (Blundell et. al. 1990; Whited 1992; Hubbard and Kashyap 1992). Importantly, studies have shown that financially constrained firms are more likely to reduce investments in a broad range of strategic activities (Hubbard, 1998; Campello et al., 2010), including inventory investment (Carpenter et al., 1998) and R&D expenditures (Himmelberg and Petersen, 1994; Hall and Lerner, 2010), thus significantly constraining the capacity of the firm to grow over time.Another set of studies has explored the relation between capital constraints and firm entry and exit decisions. Using entrepreneurs' personal tax-return data, Holtz-Eakin, Joulfaian, and Rosen (1994a) considered inheritance as an exogenous shock on the individual’s wealth and found that the size of the inheritance had a significant effect on the probability of becoming an entrepreneur. A follow-up paper (Holtz-Eakin, Joulfaian, and Rosen 1994b) has shown that firms founded by entrepreneurs with a larger inheritance (thus, lower capital constraints) are more likely to survive. Aghion, Fally and Scarpetta (2007) develop a similar argument by using firm-level data from 16 economies, comparing new firm entry and their post-entry growth trajectory.Another stream of literature, that considers incumbents as well as new entrants, (see Levine (2005) for a review of relevant studies) argues that capital constraints tend to affect relatively more the smaller, newer and riskier firms and channel capital to where the return is highest. As a result, countries with better-functioning financial systems that can ease such constraints, experience faster industrial growth. Given the idiosyncratic levels of constraints faced by companies of various sizes, scholars started to look at capital constraints as an explanation for why small companies pay lower dividends, become more highly levered and grow more slowly (Cooley and Quadrini 2001; Cabral and Mata 2003). For example, Carpenterand Petersen (2002) showed that a firm's asset growth is constrained by internal capital for small U.S. firms, and that firms who are able to raise more external funds enjoy a higher growth rate. 
Becchetti and Trovato (2002) found comparable results with a sample of Indian firms, and Desai, Foley and Forbes (2008) confirmed the same relation in a currency crisis setting. Finally, Beck et al. (2005), using survey data of a panel of global companies, documented that firm performance is vulnerable to various financial constraints and small companies are disproportionately affected due to tighter limitations. In sum, the literature to date has revealed that seeking ways to relax capital constraints is crucial to the firm-level survival and growth, the industry-level expansion and the country-level development.III.THEORETICAL DEVELOPMENTBased on neoclassical economic assumptions that postulate a flat supply curve for funds in the capital market at the level of the risk-adjusted real interest rate, Hennessy and Whited (2007) argued that “a CFO can neither create nor destroy value through his financing decisions in a world without frictions”. However, because of market imperfections such as informational asymmetries (Greenwald, Stiglitz and Weiss 1984; Myers and Majluf 1984) and agency costs (Bernanke and Gertler 1989, 1990), the supply curve for funds is effectively upward sloping rather than horizontal12 at levels of capital that exceed the firm’s net worth. In other words, when the likelihood of agency costs is high (e.g. opportunistic behavior by managers) and the capital required by the firm for investments exceeds the firm’s net worth (and it is thus uncollateralized), lenders are compensated for their information (and/or monitoring) costs by charging a higher interest rate. The greater these market frictions are, the steeper the supply curve and the higher the cost of external financing.12 For a full exposition of the model, based on neoclassical assumptions, see Hubbard (1998), p. 195-198.It follows then that adoption and implementation of firm strategies that reduce informational asymmetries or reduce the likelihood of agency costs, can shrink the wedge between the external and the internal cost of capital by making the supply curve for funds less steep. Equivalently, for a given interest rate, the firm is able to obtain higher amounts of capital. Better access to capital in turn, favorably impacts overall strategy by enabling the firm to undertake major investment decisions that otherwise would have been unprofitable, and/or by influencing the firm’s capital structure choices (e.g., Hennessy and Whited, 2007).We argue that firms with better CSR performance face lower capital constraints compared to firms with worse CSR performance. This is because superior CSR performance reduces market frictions through two mechanisms. First, superior CSR performance is the result of the firm committing to and contracting with stakeholders on the basis of mutual trust and cooperation (Jones, 1995; Andriof and Waddock, 2002). Consequently, as Jones (1995) argues, “because ethical solutions to commitment problems are more efficient than mechanisms designed to curb opportunism, it follows that firms that contract with their stakeholders on the basis of mutual trust and cooperation […] will experience reduced agency costs, transaction costs and costs associated with team production” (Foo, 2007). More specifically, such agency and transaction costs include “monitoring costs, bonding costs, search costs, warranty costs and residual losses”, according to Jones (2005, p.422). 
In other words, superior stakeholder engagement may not only directly limit the likelihood of short-term opportunistic behavior (Benabou and Tirole, 2010; Ioannou and Serafeim, 2011), but it also represents a more efficient form of contracting (Jones, 1995), which in turn is rewarded by the markets.In addition, firms with superior CSR performance are more likely to disclose their CSR strategies by issuing sustainability reports (Dhaliwal et al., 2011) and are more likely to provideassurance of such reports by third parties, thus increasing their credibility (Simnett et al., 2009; Benabou and Tirole, 2010). In turn, reporting and assurance of CSR activities: a) increases transparency around the long-term social and environmental impact of companies, and their governance structure and b) may change internal management practices by creating incentives for managers to better manage their relationships with key stakeholders such as employees, investors, customers, suppliers, regulators, and civil society (Ioannou and Serafeim, 2011). The increased availability of credible data about the firm’s strategies reduces informational asymmetries leading to lower capital constraints (Hubbard, 1998). Moreover, because reporting of CSR activities can also directly affect managerial practices by incentivizing managers to focus on long-term value creation, CSR reporting reduces the likelihood of agency costs in the form of short-termism.Indicatively, we note that the rapid growth of available capital for investment through SRI funds in recent years (Ioannou and Serafeim, 2010a), and the corresponding expansion of potential financiers that base their investment decisions on non-financial information (Kapstein, 2001), may well be due, to an extent, to the increased availability of information about the firm, and the resulting investor endorsement of the long-term focus that firms with superior CSR performance adopt. For example, many SRI funds use a positive screening model in which they overweight firms with good CSR performance in their portfolio, or a negative screening model in which they exclude from their investment universe companies with bad CSR performance, or an ESG integration model in which they integrate ESG data into their valuation models. Under all these investment models, SRI funds fully incorporated non-financial information in their decision making, over and above the traditional use of financial information.In sum, firms with better CSR performance are more likely to face lower capital constraints because of reduced informational asymmetries and reduced agency costs. This implies the following hypothesis:Hypothesis: Firms with better CSR performance face fewer capital constraints.IV. DATA AND EMPIRICAL METHODSDependent Variable: The KZ index of capital constraintsTo measure the level of capital constraints we follow the extant literature in corporate finance (e.g. Lamont et al. 2001; Almeida et al., 2004; Bakke and Whited, 2010) and construct the KZ index 13, using results from Kaplan and Zingales (1997). Specifically, as reported in Lamont et al. (2001), Kaplan and Zingales (1997) classify firms into discrete categories of capital constraints and then employ an ordered logit specification to relate these classifications to accounting variables. 
In this paper, and consistent with the prior literature, we use their regression coefficients to construct the KZ index,14 consisting of a linear combination of five accounting ratios: cash flow to total capital, the market-to-book ratio, debt to total capital, dividends to total capital, and cash holdings to capital. Firms with higher values of cash flow to total capital, dividends to total capital, and cash holdings to capital, and with lower values of the market-to-book ratio and debt to total capital, are less capital constrained. The intuition behind these variables is that firms with high cash flows and large cash balances have more internal funds to deploy for new projects and as a result they are less capital constrained (Baker et al., 2003).

13 A variety of approaches, including investment-cash flow sensitivities (Fazzari et al., 1988), the Whited and Wu (WW) index of constraints (Whited and Wu, 2006) and other sorting criteria based on firm characteristics, have been proposed in the literature as measures of capital constraints. Here, we use the KZ index because it has been the most prevalently used measure in the literature to date (Hadlock and Pierce, 2010).
14 More specifically, we calculate the KZ index following Baker, Stein and Wurgler (2003) as:
KZ_it = −1.002 CF_it/A_it−1 − 39.368 DIV_it/A_it−1 − 1.315 C_it/A_it−1 + 3.139 LEV_it + 0.283 Q_it,
where CF_it/A_it−1 is cash flow over lagged assets; DIV_it/A_it−1 is cash dividends over lagged assets; C_it/A_it−1 is cash balances over lagged assets; LEV_it is leverage; and Q_it is the market value of equity (price times shares outstanding) plus assets minus the book value of equity, all over assets. The original ordered logit regression and full exposition of the index may be found in Kaplan and Zingales (1997).
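To make footnote 14 concrete, here is a small sketch (not from the paper; the function name and input values are illustrative) that applies the Baker, Stein and Wurgler (2003) coefficients quoted above to the five ratios.

```python
def kz_index(cf, div, cash, lev, q):
    """KZ index per Baker, Stein and Wurgler (2003), as quoted in footnote 14.

    cf   : cash flow / lagged assets
    div  : cash dividends / lagged assets
    cash : cash balances / lagged assets
    lev  : leverage
    q    : market value of equity plus assets minus book equity, over assets
    Higher values indicate tighter capital constraints.
    """
    return (-1.002 * cf
            - 39.368 * div
            - 1.315 * cash
            + 3.139 * lev
            + 0.283 * q)

# Illustrative firm-year (numbers are made up, not from the paper's sample):
print(round(kz_index(cf=0.08, div=0.02, cash=0.05, lev=0.45, q=1.6), 3))
```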
36th EPS Conference on Plasma Phys. Sofia, June 29 - July 3, 2009 ECA Vol.33E, P-4.192 (2009)

Development of the Wendelstein Line towards a Helias Reactor
H. Wobig2, T. Andreeva1∗, C.D. Beidler1, E. Harmeyer2, F. Herrnegger2, Yu. Igitkhanov3, J. Kißlinger2, M. Köppen1, F. Schauer1
1 Max-Planck-Institut für Plasmaphysik, EURATOM Association, Wendelsteinstraße 1, D-17491 Greifswald, Germany; 2 Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstraße 2, D-85748 Garching, Germany; 3 Forschungszentrum Karlsruhe, Hermann-von-Helmholtz-Platz 1, D-76344 Eggenstein-Leopoldshafen, Karlsruhe
∗ E-mail: ***********************.de

Main concept of the Wendelstein line and reactor design criteria
The Helias (Helical Advanced Stellarator) reactor is based on the Wendelstein stellarator line and takes into account the design criteria of a power reactor. Wendelstein 7-X, which is now under construction at IPP Greifswald, Germany, is the largest machine in the Wendelstein series of experiments, which combines the achievements of previous physical investigations with modern technical innovations. The optimized properties of this stellarator [1] are defined by the shape of the last closed magnetic surface. The coil system was computed afterwards using the NESCOIL code developed by P. Merkel [2].
The superconducting modular coils are the main technical components of the device and must reconcile the often conflicting physical and engineering demands placed upon them. In order to have an easy-to-handle tool for extrapolation of Wendelstein 7-X towards a power reactor, the code MODUCO (MODUlar COils) has been developed, which allows an interactive modification of the coil design, followed by a computation of magnetic surfaces as well as by a magnetic field evaluation inside the coils. A straightforward scaling of the Wendelstein 7-X experiment leads to the 5-field period reactor configuration HSR5/22 with minimized Pfirsch-Schlüter currents and a subsequently reduced Shafranov shift [3]. In order to reduce the size and the costs of the machine, 4- and 3-period configurations, HSR4/18 and HSR3/15, were also considered [4, 5]. All three options can be modelled with the code MODUCO (Table 1, HSR5M, HSR4M, HSR3M based on extrapolated Wendelstein 7-X coil shapes) and serve as a basis for further optimization studies.

Modelling of the coil geometry with the code MODUCO
MODUCO represents coils in terms of cubic Bézier curves. By varying their control points the impact of modifications on the structure of the magnetic surfaces can be easily investigated. This gives a possibility to represent a large spectrum of coil shapes with a fixed number of parameters, and can be used for stellarators as well as for tokamaks.
The basic principle of coil representation is the following: the core of the coil is a closed curve which is called the central filament. This central filament consists of several segments which are described by cubic Bézier curves with a continuous tangent vector (Fig. 1). A minimum of four such segments is needed to represent a closed coil. Each of the segments is defined by 4 points in space P1, P2, P3, P4, where P1 and P4 are the endpoints of the curve and P2, P3 define the tangent vector at the endpoints (neighbouring segments have identical tangents at the common point). A larger number of segments offers the opportunity to introduce additional flexibility and to increase the quality of the model.
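As an illustration of the segment representation just described (a minimal sketch, not the MODUCO implementation), each segment is a cubic Bézier curve determined by control points P1 to P4, and a closed filament is obtained by chaining segments that share endpoints and tangent directions.

```python
import numpy as np

def bezier_point(p1, p2, p3, p4, t):
    """Point(s) on a cubic Bezier segment with control points p1..p4, t in [0, 1]."""
    t = np.asarray(t)[..., None]
    return ((1 - t) ** 3 * p1
            + 3 * (1 - t) ** 2 * t * p2
            + 3 * (1 - t) * t ** 2 * p3
            + t ** 3 * p4)

def closed_filament(segments, samples_per_segment=50):
    """Sample a closed central filament given a list of 4-point Bezier segments.

    Tangent continuity is the modeller's responsibility: each segment must end
    where the next one starts, with the handle p3->p4 of one segment collinear
    with p1->p2 of the next, giving a continuous tangent vector.
    """
    t = np.linspace(0.0, 1.0, samples_per_segment, endpoint=False)
    return np.concatenate([bezier_point(*seg, t) for seg in segments])

# Toy example: a planar closed curve built from 4 segments approximating a circle.
k = 0.5523  # standard cubic-Bezier handle length for a unit-radius circular arc
segments = [
    (np.array([1, 0, 0.]), np.array([1, k, 0.]), np.array([k, 1, 0.]), np.array([0, 1, 0.])),
    (np.array([0, 1, 0.]), np.array([-k, 1, 0.]), np.array([-1, k, 0.]), np.array([-1, 0, 0.])),
    (np.array([-1, 0, 0.]), np.array([-1, -k, 0.]), np.array([-k, -1, 0.]), np.array([0, -1, 0.])),
    (np.array([0, -1, 0.]), np.array([k, -1, 0.]), np.array([1, -k, 0.]), np.array([1, 0, 0.])),
]
points = closed_filament(segments)
print(points.shape)  # (200, 3) sampled points along the closed filament
```

A real modular coil would use non-planar control points (and more segments for complex shapes, as the paper notes), but the chaining logic is the same.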
A satellite curve having the same structure as the central filament defines the orientation of the coil cross-section. It was found that 4 segments are sufficient to model a large class of modular coils: D-shaped tokamak coils, l=2 stellarators, Heliac, Heliotron-J. More complex coil geometries require an increase of the number of segments. Fig. 2 depicts the HSR5M configuration modelled with the approach described above.

Extrapolation of Wendelstein 7-X towards a Helias reactor
Coils of Wendelstein 7-X can be represented accurately by cubic Bézier curves with five segments, which assures their further scaling and study at reactor dimensions. Due to the optimization, the vacuum and finite-beta magnetic fields in Wendelstein 7-X differ only slightly. That is why an initial modelling of the main parameters of a stellarator reactor can be done by proper shaping of the vacuum configuration.
In addition to coil geometry, the code MODUCO computes magnetic field lines and magnetic surfaces of a given coil system. Besides B, the gradient ∇B is calculated, and together with the normal vector of magnetic surfaces the geodesic curvature of field lines is given. These values can be used to optimize physical parameters such as parallel current density and radial drift of guiding centres. For this purpose a Biot-Savart approach is sufficient, where the coils are represented by the central filaments.
Calculation of magnetic fields inside the winding packs (Fig. 3), which determines the choice of the superconductor, requires more elaborate methods, taking into account the finite size of the coils. This can be done, for example, using the EFFI approach [6], where the winding pack is decomposed into a set of straight rectangular beams, and the magnetic field of these segments is given in terms of analytic functions.
Regarding the impact of the Bézier approximation on the shape of magnetic surfaces, the main difference was found in the size of the islands at the boundary, while the difference for closed magnetic surfaces is insignificant. Fig. 4 presents comparative simulation results for the rotational transform and Pfirsch-Schlüter currents. The rotational transform on the axis is ι(0) = 0.86 in Wendelstein 7-X and ι(0) = 0.88 in HSR5M, which is based on the Bézier approximation. Such basic optimization features of Helias configurations as the reduction of Pfirsch-Schlüter currents and Shafranov shift are nearly unaltered by the cubic approximation of the central filaments. The magnetic well is around -1.0% for both configurations.
High-performance superconductors which are compatible with the field strength requirements of fusion reactor coils have been developed continuously over the years. Advanced Nb3Sn as well as more strain-resistant Nb3Al have been successfully brought to a level suitable for large-scale applications [7-9] and can be considered as a serious option for a Helias reactor. It is even conceivable that by the time of the design of a demo reactor, high-temperature superconductors will be available to be used at temperatures on the order of 20 K and above [10]. Therefore, the induction at the coils of a Helias reactor can be safely increased to 13 T. Previous studies, based on NbTi at 1.8 K, gave a limitation of 10 T on the coils and 5 T on the plasma axis. Both conductor variants are represented in Table 1 by an example of HSR5M. With an advanced superconductor the field in the plasma can be increased, and, hence, the confinement properties can be improved.
One also has the additional advantage of operating the magnet at significantly higher temperatures with correspondingly better cooling efficiencies and higher heat capacities of the materials, and/or the option to decrease the coil cross-sections in order to gain more space. For the Helias, a first coil design iteration could be based upon the composition of the ITER toroidal field coil [7]. The maximum field of a Helias reactor will probably not be limited by the superconductor characteristics but by the mechanical structure required to support the enormous electromagnetic forces.

Conclusions
It has been shown that the representation of various reactor coils can be done with sufficient accuracy using cubic Bézier curves. For Wendelstein 7-X it is enough to use five Bézier segments to reconstruct the modular coil shapes. The new simulation tool, MODUCO, realises this approach and computes important plasma parameters as well as the magnetic field outside and inside the coils, which can be used as input for further structural analyses. The modelling of the 5-period Helias reactor configuration with the MODUCO code confirmed the persistence of the main optimization principles used for Wendelstein 7-X. It was estimated that the magnetic field inside the coil winding pack stays below acceptable limits for the superconductor. The flexibility and easy handling of the MODUCO code makes it a versatile tool for a practical simultaneous optimization of physics and engineering parameters, in particular for compact Helias reactor versions, such as HSR3 or HSR4.

References
1. G. Grieger et al., Fusion Technol. 21 (1992) 1767
2. P. Merkel, Nucl. Fusion 27 (1987) 867
3. H. Wobig et al., Nucl. Fusion 43 (2003) 889
4. C.D. Beidler et al., Nucl. Fusion 41 (2001) 1759
5. T. Andreeva et al., Fusion Science Technol. 46 (2004) 395
6. S. J. Sackett, LLNL report UCRL-52402 (1978)
7. C. Sborchia et al., IEEE Trans. Appl. Supercond. 18 (2008) 463
8. T. Takeuchi et al., Cryogenics 48 (2008) 371
9. N. Koizumi et al., Nucl. Fusion 45 (2005) 431
10. J. Schwartz et al., IEEE Trans. Appl. Supercond. 18 (2008) 70

Table 1. Various Helias configurations, modelled with MODUCO (HSR5M shown for both conductor variants, NbTi / Nb3Al).
                      W 7-X   HSR5M          HSR4M   HSR3M
R [m]                 5.5     22             18      15
a [m]                 0.5     1.8            1.95    1.52
ι on axis             0.86    0.88           0.77    0.54
ι at LCMS             0.96    0.96           0.80    0.60
<B> on axis [T]       3.0     5.0 / 5.5      5.0     5.0
B_max on coils [T]    6.6     10 / 12        10      10
superconductor        NbTi    NbTi / Nb3Al   NbTi    NbTi

Fig. 1 Coil approximation with Bézier curves (central filament, satellite curve, tangent vector, control points P1-P4).
Fig. 2 HSR5M, modelled with MODUCO.
Exercise 19: Reading Comprehension + Continuation Writing

I. Reading Comprehension

King's to sit on the Student Advisory Board for Volunteering. The Board plays an important role in influencing the development and delivery of the volunteering service. Its aim is to find volunteering opportunities for students, teachers and other staff at school.

Volunteering, which has the potential to create positive social impact, is a key priority for King's. The Board will ensure that students' needs are at the core of the service so that all King's students can make a difference.

This year, the Student Advisory Board for Volunteering will co-create the agenda, and potential topics include:
• Volunteering opportunities for the disabled.
• Promotion of the Board's service.
• How to recognize and reward volunteers.
• Evaluation of the Board's work.

This is a unique and exciting opportunity open to all studying at King's, regardless of how much or how little you have volunteered. The Board will be run both online and in person, and board members will be paid the London Living Wage (£ per hour) via King's Talent Bank.

The application deadline is 23:59 on Friday, 31st March 2023.

Apply via the form below, using your King's email address.

1. What does the Board aim to do for those at King's?
A. Perform voluntary work.    B. Meet their academic needs.
C. Provide learning opportunities.    D. Discover volunteering chances.

2. Which of the following is a possible topic for this year's agenda?
A. Running the Board online.    B. Training voluntary workers.
C. Assessing the Board's performance.    D. Taking care of people with disabilities.

3. Who can apply to be a member of the Board at King's?
A. The students.    B. The teachers.
C. The graduates.    D. The working staff.

...and halfway through our sentence we're interrupted. But was that really an interruption? The answer depends on whom you ask, according to new research led by Katherine Hilton from Stanford University.

Using a set of controlled audio clips, Hilton surveyed 5,000 American English speakers to better understand what affects people's perceptions of interruptions. She had participants listen to audio clips and then answer questions about whether the speakers seemed to be friendly and engaged, listening to one another, or trying to interrupt.

Hilton found that American English speakers have different conversational styles. She identified two distinct groups: high- and low-intensity speakers. High-intensity speakers are generally uncomfortable with moments of silence in conversation and consider talking at the same time a sign of engagement. Low-intensity speakers find it rude to talk at the same time and prefer people to speak one after another in conversation.

The differences in conversational styles became evident when participants listened to audio clips in which two people spoke at the same time but were agreeing with each other and stayed on topic, Hilton said. The high-intensity group reported that conversations where people spoke at the same time when expressing agreement were not interruptive but engaged and friendlier than the conversations with moments of silence in between speaking turns. In contrast, the low-intensity group perceived any amount of simultaneous chat as a rude interruption, regardless of what the speakers were saying.

"People care about being interrupted, and those small interruptions can have a massive effect on the overall communication," Hilton said.
"Breaking apart what an interruption means is essential if we want to understand how humans interact with each other."

4. What does Hilton's research focus on?
A. What interruptions mean to people.    B. Whether interruptions are good or not.
C. How to avoid getting interrupted.    D. Why speakers interrupt each other.

5. What do participants of the study need to do?
A. Record an audio clip.    B. Answer some questions.
C. Listen to one another.    D. Have a chat with a friend.

6. What do low-intensity speakers think of simultaneous chat?
A. It's important.    B. It's interesting.
C. It's inefficient.    D. It's impolite.

7. What can we learn from Hilton's research?
A. Human interaction is complex.    B. Communication is the basis of life.
C. Interruptions promote thinking.    D. Language barriers will always exist.

II. Continuation Writing [2023, Shijiazhuang senior-three quality inspection]
Read the following material and, based on its content and the given paragraph openings, write a continuation of two paragraphs so that it forms a complete passage.
Shadow play, also known as shadow puppetry (Chinese shadow puppetry), is an ancient form of storytelling and entertainment which uses flat articulated cut-out figures (shadow puppets) held between a source of light and a translucent screen or scrim. The cut-out shapes of the puppets sometimes include translucent color or other types of detailing. Various effects can be achieved by moving both the puppets and the light source. A talented puppeteer can make the figures appear to walk, dance, fight, nod and laugh.

Shadow play is popular in various cultures, among both children and adults, in many countries around the world. More than 20 countries are known to have shadow show troupes. Shadow play is an old tradition and it has a long history in Southeast Asia, especially in Indonesia, Malaysia, Thailand and Cambodia. It has been an ancient art and a living folk tradition in China, India and Nepal. It is also known in Egypt, Turkey, Greece, Germany, France and the United States.

Shadow Puppetry
Shadow puppetry (皮影戏) is also known as "shadow-figure play" (影子戏) or "lamp-shadow play" (灯影戏).
Energy Storage Science and Technology (储能科学与技术), Vol. 12, No. 12, Dec. 2023

Preparation and energy-saving effect of a disodium hydrogen phosphate dodecahydrate composite phase-change material applied to greenhouse cooling

XIAO Qiangqiang 1,2, SUN Jiakang 1, TANG Hongda 1, ZHANG Linhua 1, DIAO Nairen 2, LI Hui 1
(1 School of Thermal Engineering, Shandong Jianzhu University; 2 Shandong Zhongrui New Energy Technology Co., Ltd., Jinan 250101, Shandong, China)

Abstract: Greenhouses exploit the greenhouse effect to maintain an optimal growth environment for plants. Under intense sunlight, the temperature inside a greenhouse may exceed the plants' normal growth temperature, and appropriate cooling measures must then be taken. In this study, phase-change heat-storage technology is applied to a greenhouse: passive temperature control is used to lower the peak temperature of the greenhouse and reduce the energy consumption of active cooling. Disodium hydrogen phosphate dodecahydrate (DHPD) was selected as the phase-change material (PCM); a composite PCM was prepared by adding a nucleating agent and a thickener to mitigate the supercooling and stability problems of the hydrated inorganic salt, and its properties were tested by the step-cooling-curve method, differential scanning calorimetry and simultaneous thermal analysis. The composite PCM was encapsulated into panels, which were installed in a greenhouse, and the effect of the panels on the internal thermal environment was tested under real solar irradiation. The results show that adding 4% (mass fraction) sodium silicate nonahydrate reduces the supercooling of DHPD from 12 °C to 0.5 °C; with xanthan gum as the thickener at a mass fraction of 5%, the material shows good cycling stability; and the phase-change temperature and latent heat of the composite are 34.2 °C and 194.5 J/g, respectively, which makes it suitable for greenhouse use. Continuous 24-hour temperature monitoring shows that the peak temperature of the greenhouse is 3.5 °C lower than that of a reference greenhouse without PCM panels, the peak is delayed by 32 min, and the total time spent at high temperature is clearly shortened compared with the reference greenhouse. Applying the DHPD composite PCM to greenhouses can therefore improve the high-temperature environment of the greenhouse and reduce cooling energy consumption.

Keywords: greenhouse; phase-change material; disodium hydrogen phosphate dodecahydrate; cooling energy consumption

doi: 10.19799/j.cnki.2095-4239.2023.0483
CLC number: TK 02; Document code: A; Article ID: 2095-4239(2023)12-3635-08
Received 17 July 2023; revised 8 August 2023.
Revaluation of the Chinese Yuan and triad trade:A gravity assessment Miaojie Yu*China Center for Economic Research(CCER),National School of Development,Peking University,Beijing100871,China1.IntroductionAfter China acceded to the World Trade Organization(WTO)in2001,China’s exports increased dramatically.The annual export growth rate was20.1%during2002–2007.China’s exports to Japan and the United States,the two largest trading economies in the world,also grew very quickly.Specifically,China’s exports to the U.S.increased from U.S.$53.2billion to U.S.$232.7billion,a29.5%annual growth rate during this period.By way of comparison,China’s exports to Japan increased from U.S.$48.4billion to U.S.$102.0billion,a14.9%annual growth rate during this period.Simultaneously,the exchange rate of Chinese Yuan(RMB)against U.S.dollar changed by around20%during this period due to revaluation.As shown in Fig.1, after the RMB’s revaluation against U.S.dollar in2005,the proportion of China’s exports to the pared with China’s overall export volume followed a downward trend whereas that of China’s exports to Japan continued to decrease.It is therefore interesting to ask whether the revaluation of the RMB revaluation is reduced bilateral trade among China,Japan and the U.S.This paper seeks to understand the endogenous nexus between the movements of the bilateral exchange rates and bilateral trade among the triad:China,Japan and the U.S.The intuition seems straightforward:the increase in RMB valuation against the U.S.dollar resulted in more expensive Chinese exports to the U.S.,which in turn decreased China’s exports to the U.S.However,there is a more fundamental mechanism underlying this conventional wisdom:the bilateral exchange rate is not exogenous itself.Surging Chinese exports could result in strong pressure to protect markets raised by the import-competing special interest groups in the importing country.Accordingly,the government in the importing country would push the exporting country to revalue its exchange rate.Put another way,exports have a reverse causality on bilateral exchange rates.Ignoring this fact may make estimation results imprecise.Journal of Asian Economics xxx(2009)xxx–xxxA R T I C L E I N F OArticle history:Received21June2009Received in revised form8September2009Accepted24September2009JEL classification:F1F3Keywords:Bilateral exchange rateBilateral exportsGravity modelA B S T R A C TThe literature had paid little attention to the endogenous nexus between exchange ratesand bilateral trade.In this paper,I use a gravity model to investigate the two-way causalitybetween exchange rates and bilateral trade with data from China,Japan,and the UnitedStates during the2002–2007period.After controlling for the simultaneous bias betweenexchange rates and bilateral trade,the extensive empirical evidence shows that therevaluation of the Chinese Yuan against the dollar significantly reduced China’s exports tothe United States but had no significant effects on China’s exports to Japan.Thesefindingsare robust to different measures,econometric methods,and period coverage.ß2009Elsevier Inc.All rights reserved.*Tel.:+861062753109;fax:+861062751474.E-mail address:mjyu@.Contents lists available at ScienceDirectJournal of Asian Economics1049-0078/$–see front matterß2009Elsevier Inc.All rights reserved.doi:10.1016/j.asieco.2009.09.008Previous studies have paid little attention to this two-way causality.Most only mention one of the two causal connections.Some works focus on the impact of the bilateral exchange rate 
on bilateral trade,especially through the pass-through effect of the exchange rate (Goldberg &Knetter,1997).When the nominal bilateral exchange rate is changed,it has a pass-through effect on the price of the imports,which in turn would affect bilateral trade.Previous studies like Feenstra (1989)find empirical evidence that the effect of the bilateral exchange rate on bilateral trade is like that of a tariff.Bergin and Feenstra (2009)also explored how a change in the share of U.S.imports from a country with a fixed exchange rate like China could affect the pass-through of the exchange rate to import prices in the U.S.On the other hand,a variety of papers consider the determinants of the bilateral exchange rate (Meese &Rose,1991).Bilateral trade,among others,is one of the most important determinants.As mentioned above,in considering the endogenous nexus between these two variables,studies on a one-way causality would lead to simultaneous bias.It is understood that an identical change of exchange rate could have different effects on industry-specific bilateral exports.The economic rationale is as follows.Some industries in China (e.g.,manufacture of special purpose machinery)may have higher productivity and hence enjoy a larger profit margin when they access to the foreign markets.When RMB appreciates,such industries can still earn positive profits.In contrast,some other industries (e.g.,manufacture ofapparel,Fig.1.Bilateral trade and the exchange rate:China and Japan.CEIC Database (2008).M.Yu /Journal of Asian Economics xxx (2009)xxx–xxx2footwear and caps)only have slight profit margins when they entry to the export markets (Yu,2008).Many firms within such industries would die and exit from the export market (Melitz,2003).Therefore,an identical revaluation of RMB would induce different effects on bilateral exports across industries.Inspired by the theoretical industry-specific gravity model presented in Yu (in press),I am able to use the gravity model to capture the effect of bilateral exchange rate on industry-specific bilateral trade.This work adds to a growing literature on China’s trade and exchange rate,including work done by,among others,Thorbecke and Zhang (2006),Thorbecke and Smith (in press),Mann and Plueck (2007),Kwack,Ahn,Lee,and Yang (2007),and Marquez and Schindler (2007).By using Johansen maximum likelihood and dynamic OLS approaches,Thorbecke and Zhang (2006)found that the estimated real exchange rate in the long run for exports and imports between China and the U.S.is around a unit.They also rationalize that the appreciation of the RMB would help to reduce the bilateral Sino-U.S.trade deficit.By extending the dataset to include 33China’s trading partners,Thorbecke and Smith (in press)found that a 10%RMB appreciation leads to a decrease in 4%of processing exports and 12%of ordinary exports.Similarly,Marquez and Schindler (2007)also distinguished the role of processing trade from that of the ordinary trade.By way of comparison,Mann and Plueck (2007)emphasized that the price elasticities for American imports from China have an expected sign by using dynamic panel empirical specification.Following standard literatures,all of these works take bilateral exchange rate as given.I therefore estimate the bidirectional causality between the bilateral exchange rate and bilateral trade using data from China,Japan and the U.S.during the period 2002–2007.My first estimation equation in the system is the bilateral export equation.Since the gravity model is successful in explaining the 
growing trade volume since World War II (Feenstra,2003),I therefore use an augmented gravity equation to estimate the effect of the bilateral exchange rate on exports.My second estimation equation is the bilateral exchange rate equation,which takes various determinants into account.I use a three-stage-least-squares (3SLS)estimation to take full account of the joint correlations of the error terms between the two equations.Overall,I find that the revaluation of the RMB against the U.S.dollar reduces China’s exports to the United State whereas there is no significant impact on China’s exports to Japan.Simultaneously,the effect of exports on the bilateral exchange rate is insignificant for both the Sino-U.S.and the Sino-Japan cases.Various robustness checks confirm these findings by using different measures of exchange rates,econometric methods,and data period coverage.The rest of this paper is organized as follows.Section 2introduces the evolution of China’s bilateral exchange rate and its trade with Japan and the U.S.Section 3discusses the determinants of both bilateral trade and the exchange rates.The main estimation results and robustness checks are presented in Section 4.Section 5concludes the paper.2.China’s exchange rate and triad tradeAccording to China’s Statistical Yearbook (2008),the bilateral trade between China and Japan increased dramatically since 2002.After China acceded to the WTO in 2001,the bilateral trade volume (i.e.,exports plus imports)between China and Japan reached U.S.$101.9billion,with a 16.1%annual growth.Japan was China’s largest trading partner in 2002:the bilateral trade volume accounted for 16.5%of China’s overall trade volume,which was higher than 12.9%with the U.S.Since then,the average growth rate of Sino-Japanese bilateral trade has been about 25%.In 2006,bilateral trade between China and Japan reached U.S.$207.4billion,which accounted for 11.7%of China’s overall trade volume.This volume is smaller than the Sino-U.S.trade volume,worth U.S.$262.7billion,making Japan China’s second largest trading partner in the world since 2004.In the overall whole trade volume,Japan has maintained a modest trade surplus with China in the new century.The bilateral trade imbalance was U.S.$5billion in 2002.China then became Japan’s largest source for imports for a share of 18.3%of Japan’s total import volume,which is higher than 17.1%from the U.S.The imbalance gap has widened over time.In 2006,China had a trade deficit with Japan worth U.S.$24billion,which accounted for around 12%of the overall bilateral trade volume.According to reports by China’s General Administration of Customs and the Department of Commerce in the U.S.,Sino-U.S.bilateral trade also increased rapidly after China acceded to the WTO.Simultaneously,China maintained a huge bilateral trade surplus with the U.S.In 2004,the bilateral trade was worth U.S.$161billion.More importantly,the Multi-Fiber Agreements,which set an upper bound for textile exports from China to the U.S.in the Uruguay round of the GATT,were automatically terminated in January 2005.Accordingly,China’s textile exports to the U.S.increased dramatically soon after that.In 2005,the trade imbalance gap widened to around U.S.$200billion.Due in part to appreciation,of the RMB China’s bilateral trade surplus with the U.S.reduced from U.S.$232billion in 2006to U.S.$213billion in 2007.In 2008,the Sino-U.S.trade volume did not increase by very much because of the stronger RMB and the shrinking demand in the United States caused by the financial 
crisis.However,China still maintained a U.S.$170billion trade surplus with the United States,accounting for 57.8percent of China’s total trade surplus.Due to the surge in the Sino-U.S.bilateral trade,special interest groups,such as labor unions,in the United States appealed to the ernment by arguing that China had manipulated its currency at a unreasonable level.They argued that China had a serious real exchange rate misalignment such that China could maintain a huge bilateral trade surplus.In response to the demand by special interest groups,the U.S.congress threatened to impose trade sanctions on China if China did not ‘‘voluntarily’’restrain its exports to the U.S.,or revalue the RMB.To avoid a possible trade war,China adjusted its fixedM.Yu /Journal of Asian Economics xxx (2009)xxx–xxx3exchange rate against the U.S.dollar,which had been adopted for one decade.In July 2005,the RMB against the dollar was revalued by 2%.It was no longer solely pegged to U.S.dollar but it was pegged a basket of currencies including,among others,the U.S.dollar and the Japanese Yen.Since then,the RMB was continuously revalued.In the next three years,the RMB against the dollar was revalued by around 20%from 8.3to 6.8RMB per dollar.3.Review of related theoryThis section specifies the bilateral trade equation,the exchange rate equation,and the simultaneous bilateral export and exchange rate equations.3.1.The determinants of bilateral tradeFeenstra (2003)highlighted three reasons that explain the growing bilateral trade since World War II:growing GDP,declining transportation costs,and trade liberalization.The gravity model is expected to be the only successful model to explain the growing trade volume.It is easy to understand that the GDP growth of two trading partners plays a significant role in determining their bilateral trade.The gravity model suggests that larger countries trade more since they produce more commodities.1Also,two countries trade more if the sizes of their economies are similar (Helpman,1987).Later,Anderson and van Wincoop (2003)provided a solid theoretical micro-foundation for the typical gravity model by carefully introducing the ‘‘multilateral trade resistance’’term,which specifies the implicit price indices in the gravity equation.Traditional wisdom suggests that international trade agreements foster international trade.After a 15-year long march,China successfully acceded to the WTO as its 143rd member in 2001.The impact of WTO accession on the Chinese economy has been substantial.Some researchers like Woo (2001)argued that WTO accession was a key component to reconstruct the Chinese economy.At the very least,the accession to WTO helped China enjoy a larger foreign market,which in turn fostered exports.Besides multilateral trade agreements,trade liberalization,such as tariff reduction and non-tariff barriers,is important to bilateral trade growth.Shortly after it began its economic reforms,China set up a whole system of tariffs in the 1980s.After the Uruguay Round of the WTO,China experienced huge tariff reductions due,in large part,to its eagerness for WTO accession.China cut its tariffs from 35%in 1994to around 17%in 1997.In 2001,China further cut its average tariff rate from 16.4%to 15.3%.Equally importantly,the bilateral exchange rate plays another key role in bilateral trade.Previous studies like Feenstra (1989)argued that there is a symmetric response of import prices to changes in an import tariffs and the bilateral exchange rates.This hypothesis is supported by Japanese and the 
U.S. industrial data. The economic rationale for the effect of the exchange rate on bilateral trade seems straightforward. A change in the nominal bilateral exchange rate has a pass-through effect on import prices. Accordingly, both of these changes affect bilateral trade. Put another way, an appreciation in the real exchange rate, which is defined as an increase in the relative price of tradable to non-tradable goods, would lead to a decrease in bilateral trade.

Of particular interest in this paper is the effect of movements in China's exchange rate on its bilateral trade with two other giants: Japan and the U.S. To estimate this effect, I control for the GDP of China and of its trading partner in the estimations inspired by the gravity model. I drop data before 2002 to avoid the structural change in the Chinese economy caused by China's WTO accession in 2001. Also, to keep the model neat, the usual transportation-cost variables such as bilateral geographic distance are captured in an error term in the empirical model.²

3.2. The determinants of the exchange rate

As summarized by Meese and Rose (1991), there are five models to explain the determinants of nominal exchange rates: a flexible-price monetary model, two sticky-price models, and two Lucas-type (1982) models. In all of these models, the bilateral spot exchange rate (e_j) is determined, at least, by both the nominal domestic (i.e., China) money supply relative to the foreign money supply (M_CH/M_j) and domestic industrial production relative to foreign industrial production (Y_CH/Y_j). These common variables gain special theoretical support in Lucas's (1982) model of a two-good, two-country, pure exchange economy. Shortly after that, Hodrick (1988) extended Lucas's (1982) and Svensson's (1985) models to include the change in the relative money growth rate in order to capture the timing of money market transactions.

The other three models have different extensions to the benchmark setup introduced by Lucas (1982). In particular, the flexible-price monetary model includes a nominal interest differential, since it assumes that purchasing power parity (PPP) still holds when the home country faces an exogenous real exchange rate shock. In contrast, the two sticky-price-type models emphasize that the adjustment of goods prices is slower than that of asset prices. Therefore, one of the sticky-price-type models assumes that the real interest differential, measured by the difference between interest rates and inflation, is included in the estimation. The other sticky-price-type model instead argues that the relative cumulated trade balance (TB_j) is an appropriate explanatory variable. That is,

$$e_j = f\left(TB_j,\ \frac{M_{CH}}{M_j},\ \frac{Y_{CH}}{Y_j}\right) + \text{error},\qquad(1)$$

where the bilateral nominal exchange rate e_j is measured as China's price of a unit of domestic exchange. Since my main interest in this paper is to explore the endogenous nexus between bilateral trade and bilateral exchange rates, I adopt specification (1) to capture the effect of China's bilateral trade on its exchange rate. It is also worthwhile to point out that the sticky-price-type models above typically apply to developed economies with open capital markets. China, as the largest developing economy, currently still has capital controls. Put another way, capital controls may still play a role in determining China's exchange rate. In specification (1), capital control is included in the error term as an omitted variable, which reinforces the endogeneity of the bilateral exchange rate. We will address this endogeneity issue shortly.³

Previous research on real exchange rates takes special interest in the extent of their misalignment. It is usually believed that there exists an equilibrium exchange rate at which both internal and external balances are achieved. The gap between the estimated equilibrium and the actual exchange rate is the so-called real exchange rate misalignment (Hinkle & Montiel, 1999; Williamson, 1994). There are two major approaches to identifying the misalignment (Zhang, 2001). One of them is based on the idea that the equilibrium concept is derived from the macroeconomic balance; on this basis, the misalignment is measured either by PPP or by the black-market exchange rate. The other approach is the so-called Behavioral Equilibrium Exchange Rate (BEER): the equilibrium exchange rate is determined by a variety of explanatory variables of economic fundamentals. Since my objective in this paper is to estimate the effect of the exchange rate on trade, I do not attempt to measure the misalignment of China's real exchange rate. However, I use the real exchange rate as another indicator of the exchange rate to estimate its effect on bilateral trade.

3.3. Empirical methodology

Since my main interest in the present paper is to explore the endogenous nexus between bilateral trade and the bilateral exchange rate, it is worthwhile to conduct unit root tests to check whether these two variables have unit autoregressive roots or deterministic time trends.⁴ I therefore construct the following specification to perform a unit-root test on an AR(p) model:

$$\Delta Y_t = \mu + \theta Y_{t-1} + \sum_{j=1}^{p} \alpha_j \Delta Y_{t-j} + \nu_t,\qquad(2)$$

where Y_t denotes, respectively, the two key variables: China's industry-k exports to country j in year t (X^{CH}_{jkt}) and China's bilateral exchange rate with country j in year t (e_{jt}). To fully explore the time-series properties of the data, I then take a step forward and check whether these two variables are cointegrated by using Johansen's (1995) maximum likelihood test. In particular, I use the trace statistic and the maximum eigenvalue statistic to perform the diagnostic tests for the cointegration of these two series. Several information criteria, such as Akaike's (1974) information criterion (AIC), the final prediction error (FPE) criterion, Hannan and Quinn's (1979) information criterion (HQIC), and Schwarz's (1978) Bayesian information criterion (SBIC), are used to determine the appropriate number of lags to use in the vector autoregressions. In addition, I also check whether the error term (ν_t) has whiteness properties.

As mentioned above, an identical change of the exchange rate could generate heterogeneous effects on industry-specific bilateral exports. Hence, in this paper I adopt a panel dataset to explore this characteristic. Accordingly, I introduce the following simultaneous equation model (SEM) for the panel estimations:

$$\ln X^{CH}_{jkt} = -\beta_0 + \beta_1 e_{jt} + \beta_2 \ln Y^{CH}_{kt} + \beta_3 \ln Y_{jkt} + \eta_k + \varphi_t + \mu_{kt},$$
$$e_{jt} = -\gamma_0 + \gamma_1 \ln X^{CH}_{jkt} + \gamma_2 \ln Y^{CH}_{kt} + \gamma_3 \ln Y_{jkt} + \gamma_4 \ln M^{CH}_{t} + \gamma_5 \ln M_{jt} + \varepsilon_{kt}.\qquad(3)$$

In Eq. (3), the bilateral export is a main determinant of the bilateral exchange rate. This is inspired by the theoretical discussion stated above: the bilateral trade balance is one of the important explanatory variables of the bilateral exchange rate.⁵

Footnotes:
1. Though not inspired by the theoretical literature, it may be worthwhile to include trading partners' GDP per capita in the gravity equation (Carrère, 2006).
2. The inclusion of various bilateral geographic variables did not change my estimation results, since such variables would be dropped automatically in the two-way fixed-effects estimations.
3. I
gratefully thank a referee for pointing this out.4I gratefully thank a referee for suggesting this point.5Note that bilateral import is not specified as a regressor but is absorbed into the error term of (3)given that my main interest of the present paper is to discover the endogenous nexus between bilateral export and bilateral exchange rate.Of course,to fully explore the effect of bilateral exchange rate on bilateral trade imbalance,one can go further to estimate the effect of bilateral exchange rate on bilateral imports,which is a possible topic for future research.M.Yu /Journal of Asian Economics xxx (2009)xxx–xxx5In addition,the new variables in the bilateral exchange rate Eq.(3)M CHt and M jt are China and its trading partner j ’smonetary bases,respectively.6I also include,though not listed in the equations above,the j -period time lag of the exchange rate e jt Àl in both equations as robustness checks.Following Feenstra (1989),the expected exchange rate in each quarter is a log-linear function of the current and past three quarterly average spot rates.As theoretically recognized by Anderson and van Wincoop (2003),standard gravity estimations on bilateral trade could suffer from the bias caused by ‘‘multilateral trade resistance,’’which measures the implicit price indices in the gravity model.Such multilateral trade resistance could be varied by industries and time as well given that the effect of bilateral exchange rate on bilateral trade are different across industries as introduced above.Hence,when the dataset is a panel,the regular OLS estimates are biased if the trade resistance is ignored.7To control for multilateral resistance among trading partners,inspired by Rose and van Wincoop (2001),I use fixed effects to control for other unobservable features within each industry of the trading partners over time.In particular,h k captures the unobserved industry-specific time-invariant fixed-effects whereas w t is the time-varying fixed-effects.Since the samples are quarterly data,both year-specific and quarter-specific fixed effects are included to completely capture the time-specific fixed-effects.Turning to the exchange rate equation,in addition to bilateral trade,the bilateral exchange rate is affected by trading partners’GDP and monetary base,as inspired by the sticky-price-type models.The error terms (m kt ,e kt )are a bivariate residual vector.In this SEM,the coefficients b 1and g 1consider the simultaneous feedback from bilateral trade and the exchange rate,which are my main interests.Since the error terms in the SEM are generally correlated with the dependant variables,the conventional methods such as Ordinary Least Squares (OLS)and Generalized Least Squares (GLS)are inconsistent.In this case,the 3SLS,as a full-information likelihood approach,is appropriate to take the error-term correlations between the two equations into account (Wooldridge,2002).4.Data,econometrics,and resultsIn this section,I first describe the dataset used in the paper,followed by a discussion of the Sino-U.S.estimations and the Sino-Japanese estimations.I then address the possible endogeneity problem.Finally,I close the section with various robustness checks.4.1.DataMy data coverage is from the first quarter of 2002to the last quarter of 2007.8The economic rationale of focusing on this window is that China’s economy was significantly affected by its WTO accession in 2001whereas China’s exchange rate was stable at that time.Since my objective in this paper is to estimate the effect of the exchange rate 
on bilateral trade,I therefore drop observations before 2001to avoid the possible structural change caused by the WTO accession shock.I use log directional industrial imports of the U.S.(Japan)from China to measure bilateral trade among China,Japan and the U.S.9This is because directional imports are consistent with the prediction of the gravity model,which only emphasizes one-way trade flow (Baldwin &Taglioni,2006).It is recognized that there is a mismatch problem between using data on China’s exports and American imports due to China’s re-export (via Hong Kong)problem (Feenstra &Hanson,2004):Exports from China via Hong Kong are counted as American imports from China but they are not counted as China’s exports to the U.S.To be consistent with previous works using the standard gravity model,I use American import data to measure Sino-U.S.trade.In addition,I use quarterly average rates to measure the bilateral nominal spot exchange rate.In this way,I can avoid the daily random error caused by adopting spot rates instead (Feenstra,1989).Unless specified,all data are from the CEIC database,which is publicly available.10The directional import data is at the SITC two-digit level.Trading partners’GDPs are measured in constant U.S.dollars.Data on American GDP is disaggregated by sectors (NAICS)and are available from the Bureau of Economic Analysis (BEA).China’s producer price index can be accessed from China’s Statistical Yearbook (2008)by National Bureau of Statistics of China.The American PPI is obtained from the Bureau of Labor Statistics.Similarly,I get the data on Japanese PPI (base year:2003)from the Bank of Japan.All other Japanese data are from the CEIC database as introduced above.Finally,as usual,I use M1to measure the monetary base.6Of course,money supply could be endogenous in the sense that bilateral exchange rate could be reversely affect money supply.However,in the present paper I only care about the correlation between bilateral exchange rate and money supply in trading partners since the latter variables are just control variables in the Eq.(3).7This omitted variable bias (i.e.,omitted terms correlated with the trade-cost term)was called the ‘‘gold medal’’of classic gravity model mistakes (Baldwin &Taglioni,2006).8Due to the restriction of data availability,I am not able to update the data till the last quarter of 2008,which would be a possible extension for future research.9Different from using data on goods and service trade as generally discussed in Section 2,I use industrial trade data only for estimations due to data availability.I thank Rachel McCulloch for pointing this out.10Data source: .M.Yu /Journal of Asian Economics xxx (2009)xxx–xxx6。
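To make the empirical strategy above concrete, the sketch below shows how the unit-root and cointegration diagnostics of Eq. (2) and a two-equation system in the spirit of Eq. (3) could be run in Python. This is not the author's code: the DataFrame `df` and its column names are hypothetical placeholders, the exclusion restrictions in the formulas are illustrative rather than the paper's fixed-effects-based identification, and the 3SLS call relies on the third-party `linearmodels` package as one possible implementation of the estimator described in the text.

```python
# Minimal sketch (not the paper's code) of the diagnostics and system estimation.
# `df` is a hypothetical panel with columns: 'ln_export', 'ex_rate',
# 'ln_gdp_ch', 'ln_gdp_j', 'ln_m1_ch', 'ln_m1_j'.
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen
from linearmodels.system import IV3SLS

def unit_root_and_cointegration(exports: pd.Series, ex_rate: pd.Series) -> None:
    """Augmented Dickey-Fuller test for each series (Eq. 2) plus a Johansen test."""
    for name, s in [("exports", exports), ("exchange rate", ex_rate)]:
        stat, pval, *_ = adfuller(s.dropna(), autolag="AIC")
        print(f"ADF {name}: stat={stat:.3f}, p-value={pval:.3f}")
    joh = coint_johansen(pd.concat([exports, ex_rate], axis=1).dropna(),
                         det_order=0, k_ar_diff=1)
    print("Johansen trace statistics:", joh.lr1)           # compare with joh.cvt
    print("Johansen max-eigenvalue statistics:", joh.lr2)  # compare with joh.cvm

def estimate_system(df: pd.DataFrame):
    """Two-equation system in the spirit of Eq. (3), estimated by 3SLS.
    The bracketed term marks the endogenous regressor and its instruments;
    the instrument choices here are illustrative only."""
    equations = {
        "exports": "ln_export ~ 1 + ln_gdp_ch + ln_gdp_j"
                   " + [ex_rate ~ ln_m1_ch + ln_m1_j]",
        "ex_rate": "ex_rate ~ 1 + ln_m1_ch + ln_m1_j"
                   " + [ln_export ~ ln_gdp_ch + ln_gdp_j]",
    }
    return IV3SLS.from_formula(equations, df).fit()
```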
A real explosion: The requirement of steam explosion pretreatment

Zhengdao Yu a, Bailiang Zhang a,*, Fuqiang Yu c, Guizhuan Xu a, Andong Song b
a College of Mechanical and Electrical Engineering, Henan Agricultural University, Zhengzhou 450002, China
b College of Life Sciences, Henan Agricultural University, Zhengzhou 450002, China
* Corresponding author. Tel./fax: +8603922416900. E-mail address: bailiangzhang@ (B. Zhang).

Highlights
• Without EPD the severity factor cannot describe the entire SE process alone.
• Explosion speed will cause a qualitative change in the effect of biomass pretreatment.
• Only the real explosion will satisfy the two requirements of consistency.

Article history: Received 17 March 2012; received in revised form 16 June 2012; accepted 16 June 2012; available online xxxx.
Keywords: Pretreatment; Steam explosion pretreatment; Steam pretreatment; Explosion power density; Severity factor; Catapult explosion mode; Consistency

Abstract
The severity factor is a common term used in steam explosion (SE) pretreatment that describes the combined effects of the temperature and duration of the pretreatment. However, it ignores the duration of the explosion process. This paper describes a new parameter, the explosion power density (EPD), which is independent of the severity factor. Furthermore, we present the adoption of a 5 m3 SE model for a catapult explosion mode, which completes the explosion within 0.0875 s. The explosion duration ratio of this model to a conventional model of the same volume is 1:123. The comparison between the two modes revealed a qualitative change caused by explosion speed, demonstrating that this real explosion satisfies the two requirements of consistency, and suggests a guiding mechanism for the design of SE devices.
© 2012 Elsevier Ltd. All rights reserved. doi: 10.1016/j.biortech.2012.06.055

1. Introduction

Lignocellulosic biomass consists primarily of three polymer types: cellulose, hemicellulose and lignin. The structure of the surrounding lignin matrix is known to prevent the biological degradation of cellulose and hemicellulose into simple sugars (Liu, 2011). In order to facilitate cellulose hydrolysis, biomass is commonly treated with a process known as steam explosion (SE), a hydrolytic pretreatment that releases the constitutive components of biomass, thereby increasing the enzyme and solvent accessibility of cellulose (Glasser and Wright, 1998). Since the development of this process by Mason (1928), SE pretreatment has been applied extensively in the conversion of biomass (Lipinsky, 1981; Bungay, 1982; Ng et al., 1983; Hooper and Li, 1996; Guo et al., 2011; Sipos et al., 2010; Viola et al., 2008; Horn et al., 2011). SE pretreatment involves exposing the biomass to high-temperature pressurized steam for a short period before rapidly releasing the pressure in an explosive decompression event (Glasser and Wright, 1998). Scholars suggest that the treatment efficiency is a product of [...] thermal energy into mechanical energy (Focher et al., 1998; Kessler et al., 1998; Angles et al., 2001). Thus, the theoretical explosion power density (EPD) is founded on the hypothesis of the adiabatic expansion process:

$$P_e = \frac{\Delta H_s + \Delta H_l + \Delta H_m}{t \cdot V}$$

where ΔH_s, ΔH_l and ΔH_m signify the enthalpy drop of the steam, the liquid water and the material, respectively, t represents the explosion duration, and V represents the volume of the explosion reactor. Compared with the severity factor, which expresses the severity of the boiling phase, the EPD expresses the severity of the explosion phase. Together, they describe the entire SE process. ΔH is determined by the change in temperature before and after the explosion phase. The duration t is often neglected; in many cases t is defined by the structure of the pretreatment equipment and the treatment pressure. Different equipment commonly causes the value of t to increase by orders of magnitude. Two SE test models will be presented for comparison.

2. Comparison of two models

First, we examined the two temporal components of the explosion. The explosion duration is always composed of two interrelated time periods: (1) the steam deflation passage (valve) open time and (2) the steam deflation (pressure balance) time. The first is measured or calculated according to the movement velocity of the specific device. The theoretical approach to the second component, illustrated in Fig. 1, can be calculated with equations describing gas mechanics (adiabatic processes), gas flow, and the sound velocity of adiabatic processes.

Here, we present two vastly different SE pretreatment models. The first model adopts the classical structure in valve blow mode (Turn et al., 1998), which is widely used at present (Fig. 2A). There is also a second mode that enables continuous feeding by a screw conveyor (Bender, 1979; Brown, 1980). However, this mode is not discussed in this paper because its process differs greatly from SE principles, its energy consumption is too high for industrial processes, and it is still at the pilot-plant scale (Glasser and Wright, 1998; Avella and Scoditti, 1998).

The second model adopts a structure (Fig. 2B) in catapult explosion mode that is principally composed of a cylinder and a piston. In the steam boiling phase, the two parts are tightly coupled (Fig. 2C). During the explosion, the piston, driven by three pneumatic linear actuators, bursts out of the cylinder, a process that is equivalent to a vessel suddenly fracturing into two halves (Fig. 2D). As the piston opens, it rapidly accelerates due to the kinetic energy of the steam and material, as well as the force from the devices driving the process. When the stroke of the piston reaches one fourth of the cylinder diameter, the effective gas deflation passage – the exposed area between the piston and cylinder – reaches the same area as the cylinder cross-section. Hence, the duration of the first period is equivalent to the duration of the piston stroke.

Both of the test models utilize saturated steam under the same conditions: a temperature of 493 K (220 °C) and an absolute pressure of 2.32 MPa. According to these conditions, we calculated the deflation time of each model (Table 1). The results coincide with measurements found on the product lines (Movies S1-S2).

The enormous variation in the deflation time between the two SE pretreatment modes (approximately 123-fold in this case) is derived from two structural factors. The primary factor is the difference in the proportion of the steam deflation passage area to the volume of the vessel; the minor factor is the different opening styles of
the deflation passage.This fact suggests that the deflation162 time is principally determined by the intrinsic structure of the SE163 equipment.164 In valve blow mode,the valve opening speed is really one of the165 most important factors.Especially for scale-up application,large-166 scale valves opening speed is restricted due to the engineering de-167 sign reason.The different diameter ball valves have the different168 opening speeds,such as2inches for0.8s,4inches for1.0s,8169 inches for1.5s and12inches for2.0s approximately(supplied170 by manufacturers).This suggests that the valve opening speed will171 be gone down with the increase of the valve diameter.However,172 most of the time is spent in the gas release process.173 Yet in catapult explosion mode,the valve opening speed trans-174 forms into the pistons movement speed in the cylinder.By theQ3Please cite this article in press as:Yu,Z.,et al.A real explosion:The requirement of steam explosion pretreatment.Bioresour.Technol.(2012),http:// /10.1016/j.biortech.2012.06.055175differential pressure in the catapult process,the large diameter pis-176ton can get equal proportion of driving force to the small diameter 177piston,and so achieve the same speed breaking away from cylin-178der.Therefore,‘‘the opening speed of valve’’in scale-up application 179of catapult explosion can remain consistent with the small 180equipment.181182183184185186187188189190191192193194195196197198199200201202203204205206207208209210211212213214215216217218219220221222223224and triggered the switch of the piston drive device.The explosion 225was completed within 0.0875s.2262.2.4.Observation by scanning electron microscopy (SEM)227Scanning electron micrographs of untreated and pretreated 228maize stalk were captured using FEI Quanta 200.Prior to SEM eval-229uation,the samples were coated with platinum using the plasma 230sputtering technique.The metallization conditions were 0.05mbar 231and 30mA,and the observation conditions were an acceleration 232voltage of 15kV and a distance of 20mm.All the observations 233were made at the same magnification to see the whole fiber and 234the structure of the fracture surface.Some samples were observed 235at higher magnification to see the lignin droplets.2362.2.5.The comparison about pretreatment duration237The EPD concept and the catapult process introduced in this 238manuscript have been just explored and applied in recent 6years 239by nearly 40universities and research institutions in China,of 240which about traditional steam explosion mode compared toFig.2.The structure diagram of two SE pretreatment models.(A)A model in valve blow mode.The volume of the model is approximately 5000L.It adopts a 4-inch pneumatic ball valve as the release valve,which has an effective gas deflation passage area of 6280mm 2.(B)A model in catapult explosion mode.The volume of the model is 5120L,and the inner diameter of the cylinder is 1500mm.The piston driven by three tailored pneumatic linear actuators.The cylinder and piston set installed in the shadowed area.(C)The status of the piston in steam boiling phase.(D)The status of the piston as the explosion proceeds.Table 1The deflation times of the two models (s).1st Test model (valve blow mode)2nd Test model (catapult explosion mode)Steam deflation passage open time (measured)0.50.051Steam deflation time (calculated)10.2580.0365Sonic phase 6.9430.0247Subsonic phase 3.3150.0118Sum10.7580.0875Please cite this article in press as:Yu ,Z.,et al.A real explosion:The requirement of steam explosion 
pretreatment.Bioresour.Technol.(2012),/10.1016/j.biortech.2012.06.055241catapult mode is only some laboratory data and no formal release 242or papers published in international academic journals by now.But 243according to these data(Table.2),we can show the new steam 244explosion mode application,which represents the EPD great rise,245brought the process data great change,especially in pretreatment 246duration.Meanwhile,the comparison of the severity factor lost 247the basic guiding significance.248It must be pointed out that there is not a substitute relationship 249between them,but the expression of the two stages in a complete 250steam explosion process.Just when the EPD varies greatly,it can 251make log R 0less guiding significance.252 3.Results and discussion 253 3.1.Results254In catapult explosion mode,the EPD of the SE process can reach 255106Wm À3when the pressure exceeds 1.5MPa.For example,the 256minimum value of EPD only included ‘D H s ’in the experiment with 257the temperature of 220°C,resulting in an EPD of approximately 25828.3MWm À3.In most of the experiments in catapult explosion 259mode,an apparent cooling phenomenon of the SE pretreated mate-260rial is the strongest evidence that the thermal energy has been effi-261ciently converted into mechanical energy.This phenomenon can 262result in a temperature reduction from 220to 50°C or lower.This 263is also an apparent difference between the two modes.Assuming 264equal energy consumption,the deflation time must approach zero 265to maximize the EPD and the mechanical energy converted from 266thermal energy.Thus,the highest efficiency would be achieved 267by lower temperatures and shorter durations.It is also important268269270271272273274275276277278279280281282283284285286287288289290291292293294295296297298299300the addition of a little gunpowder,while fireworks cannot operate 301properly with significant quantities of gunpowder because the en-302ergy release process needs to be sustained longer than gunpowder 303allows.304Secondly,the deflation time should be shorter than the pressure 305balance time between the inside and outside of the pretreated bio-306mass internal structures,which varies with the biomass species.307Steam with a powerful seepage force will penetrate the entirety 308of an organic tissue structure before reaching a balanced pressure 309state (Focher et al.,1998).Given a rapid enough decompression,310most of the steam and hot liquid water in the biomass will quickly 311expand and break free of the structure.Thus,the internal structure 312of the biomass is disrupted by a mechanical shearing force.In a 313slow decompression,the pressure will have time to equalize across 314the structure,resulting in a much smaller shearing force,suggest-315ing that differential time results in differential pressure.3163.2.2.Two meanings of consistency317Another important requirement in the definition of SE pretreat-318ment is consistency,which has two meanings.The first describes 319the consistency of the external physical impact over the entire 320course of the explosion phase.The second describes the consis-321tency of the SE pretreatment effect between the experimental 322equipment and the industrial production equipment.The former 323ensures that the SE pretreated material has uniform batch bio-324chemical characteristics and prevents variable impacts on the 325digestibility.The latter ensures that experimental results can be 326dependably applied to industrial production with uniform and 327standardized effects.328With 
regard to the two meanings of consistency,we can com-329paratively analyze the two SE pretreatment modes.Given a valve 330blow mode,the time of steam deflation in tens of seconds will3.The structural diagram of QBS-80SE test bed.(A)The operation of the steam supply valve.(B)The layout of some components in relation to the simulation of valve blow mode.Please cite this article in press as:Yu ,Z.,et al.A real explosion:The requirement of steam explosion pretreatment.Bioresour.Technol.(2012),/10.1016/j.biortech.2012.06.055340341342343344345346347348Table 2The comparison data in some application fields.Application fieldsPretreatment objectTraditional SE/steam Catapult explosion Comparison of biochemical treatment resultsDuration/pressure (m/°C)log R 0EPD appr.(MWm-3)Duration/pressure (s/°C)log R 0EPD appr.(MWm-3)Liquor Brewing Food crops 120/120 2.668Close to zero 10/210 2.46135.2Catapult is more optimal HempDegumming Hemp 15/1602.9430.026120/2203.83428.3Being essentially equal GarbageDecontaminationMunicipal solid waste120/180 4.4350.05290/220 3.70932.4Catapult is more optimal Z.Yu et al./Bioresource Technology xxx (2012)xxx–xxx5Please cite this article in press as:Yu ,Z.,et al.A real explosion:The requirement of steam explosion pretreatment.Bioresour.Technol.(2012),/10.1016/j.biortech.2012.06.055349length is maintained.Certainly it is undeniable that millisecond 350differences in explosion duration still exist between experimental 351equipment and industrial equipment in catapult explosion mode. 352 3.2.3.Significance issues353The concern of this manuscript is a new steam explosion pro-354cess indicator and a new steam explosion process mode.The key 355of this way is the new physical parameters and physical phenom-356enon in the process different from the past.We expected catapult 357steam explosion to provide a new opportunity which would im-358prove the effect of pretreatment.However,it does not mean a con-359firmed positive result now.Also,the necessity of EPD proposition is 360not that it can help the researcher to improve on the digestibility 361through the current evidences right now,but provide a different 362perspective with which we observe steam explosion process and 363quantify the difference between the traditional technology and 364the new process.Thus these present the new research methods 365and ideas for this area researcher.In addition,energy conservation 366is also another important practical significance of the new 367technology.368 plexity issues369Although the catapult equipment structure is really more com-370plex than the previous structure,the valve blow mode,on some le-371vel,increased the complexity of the operation and process in scale-372up application.First,non-uniform batch biochemical characteris-373tics and extra presoak process cause process complexity.For in-374stance,the most common‘‘raw’’phenomenon in SE traditional 375pulping process will bring cumbersome operation for the next pro-376cess.Second,because the valve blow mode does not realize the 377effective heat-work conversion process,the temperature of the 378pretreated material in valve mode is near100°C.In most applica-379tions,this temperature goes beyond the temperature for the fer-380mentation or operation in next step.Therefore there need extra 381cooling reactor and heat exchanger to reduce the temperature of 382the material blown up.These make the whole process more com-383plex.Third,since the equipment in valve blow mode is manufac-384tured according to the standard of pressure vessel,the 
charging 385inlet size is limited strictly.This constraint known to greatly re-386strict the kinds of material,particle size and charging speed will 387prolong the period of feeding and raise the risk of inlet blocked 388up.As a result,the charging operation becomes more complex 389and dependent on manpower.Finally,as the small size of the 390valves,most of the valve blow equipments in the actual production 391are easy to form the bypass phenomenon in the process of material 392blown up.So there often remains a lot of remainder in the reactor. 393This made it difficult for the actual continuous production and the 394fermentation process.These problems cause operation complexity. 395 3.2.5.Cost issues396For most applications,the processing time is shortened from 397900–1200s to90–120s.This means the unit volume processing 398capacity of equipment is improved by10times,which is actually 399the key factor to reduce the cost of the equipment.Although the 400equipment structure is really more complex than the previous 401structure,but according to the same production capacity,cost is 402still decreased.In addition,the unit volume decrease also allows 403the design for higher working pressure.This broadens the steam 404explosion process pressure range in scale-up application.Further-405more,because the single capacity is improved,the single market 406acceptable cost space is increased,and so as to make it possible 407to improve the intelligence level of equipment.Shortening the pro-408cess cycle and speeding up the feeding process provide access to 409the automation and continuous production.This also cuts the oper-410ation cost and labor cost.411 Energy consumption is most part of the SE equipment operating412 costs.Because catapult explosion realized the heat-work conver-413 sion process,it can achieve the pretreatment purpose in a rela-414 tively short period of steaming time and thus,consume less415 energy.The actual production statistics also show a significant fall416 in the steam consumption.For instance,traditional explosion417 steam consumption is0.8–1ton per ton of pretreated material418 (duration time for15–20min),while catapult explosion is0.25419 ton(duration time for90–120s).It is known that the lower steam420 consumption plays the decisive role to reduce the pretreatment421 cost.422 4.Conclusion423 In conclusion,the merits of SE pretreatment over simple boiling424 should be closely examined and reconsidered for industrial appli-425 cations.Under close scrutiny,it is possible that we may uncover426 an exciting new process for the conversion of biomass.427 Acknowledgements428 Supported by the National Natural Science Foundation of China429 and the Program of Knowledge Innovation Project Granted by Chi-430 nese Academy of Sciences.We thank S.G.Yang,T.B.Ren,and C.H.431 Liu for technical assistance and X.F.Li for improving the English432 language.433 References434 Agbor,V.B.,Cicek,N.,Sparling,R.,Berlin, A.,Levin, D.B.,2011.Biomass435 pretreatment:fundamentals toward application.Biotechnol.Adv.29,675–685.436 Angles,M.N.,Ferrando,F.,Farriol,X.,Salvado,J.,2001.Suitability of steam exploded437 residual softwood for the production of binderless panels.Effect of the pre-438 treatment severity and lignin addition.Biomass Bioenergy21,211–224.439 Avella,R.,Scoditti,E.,1998.The Italian steam explosion program at ENEA.Biomass440 Bioenergy14,289–293.441 Avellar,B.K.,Glasser,W.G.,1998.Steam-assisted biomass fractionation.I.Process442 considerations and economic evaluation.Biomass 
Jinan New Channel School (济南新航道学校) — IELTS READING: a collection of essential practice exercises for high scores in IELTS reading. Note: this collection is for internal use by Jinan New Channel students only; reproduction and circulation are prohibited.
Contents
1. Amateur Naturalist
2. Communicating Styles and Conflict
3. Health in the Wild
4. The Rainmaker
5. Shoemaker-Levy 9 Collision with Jupiter
6. A Second Look at Twin Studies
7. Transit of Venus
8. Placebo Effect — The Power of Nothing
9. The Origins of Laughter
10. Rainwater Harvesting
11. Serendipity: The Accidental Scientists
12. Terminated! Dinosaur Era!
13. TV Addiction
14. El Nino and Seabirds
15. The Extinct Grass in Britain
16. Education Philosophy
17. The Secret of Yawn
18. Consecutive and Simultaneous Translation
19. Numeracy: Can Animals Tell Numbers?
20. Going Nowhere Fast
21. The Seed Hunters
22. The Conquest of Malaria in Italy

READING PASSAGE 1
You should spend about 20 minutes on Questions 27–40, which are based on Reading Passage 3 below.
Passage background: "Amateur Naturalist" is about people who like to observe changes in the natural world – plant growth, the beekeeping process, climate change, and other natural phenomena – and who keep records of their observations. The data they collect are called "amateur data". The passage introduces these amateur naturalists and the debate among professional naturalists over whether amateur data can be used, how such data should be used, and how reliable they are.
音乐特点的作文英语Music has the power to evoke emotions, tell stories, and bring people together. It is a universal language that transcends cultural and linguistic barriers. Here's an essay on the characteristics of music:The Essence of Music: A Journey Through Its CharacteristicsMusic is an art form that has been with humanity since time immemorial. It is a complex tapestry woven from a variety of characteristics that make it a unique and powerful medium of expression. Let's explore some of these defining featuresthat set music apart.1. Melody: The Heart of MusicMelody is the sequence of notes that forms the main tune of a piece of music. It is the most recognizable aspect of a song and often the first thing that captures our attention. Awell-crafted melody can be simple yet haunting, or complex and intricate, but always memorable.2. Harmony: The Soul of MusicHarmony refers to the combination of simultaneous musical notes to produce a pleasing sound. It adds depth and richness to a melody, creating a sense of fullness and resonance. The interplay of harmonies can evoke a range of emotions, fromtranquility to excitement.3. Rhythm: The Pulse of MusicRhythm is the pattern of sound and silence in music. It gives music its pulse and drives the beat. Rhythm can be steady and predictable, or it can be complex and syncopated, creating a sense of tension and release.4. Dynamics: The Emotion of MusicDynamics in music refer to the volume and intensity of the sound. They play a crucial role in conveying the mood and emotion of a piece. From the softest whisper to the loudest roar, dynamics can dramatically alter the impact of a musical composition.5. Texture: The Layering of MusicTexture is the way in which different musical elements are combined. It can range from a single melody line, known as monophonic, to multiple independent melodies, known as polyphonic, or even to a rich tapestry of sounds, known as heterophonic or homophonic.6. Timbre: The Color of MusicTimbre, or tone color, is the quality of a musical note or sound that distinguishes different types of sound production, such as instruments or voices. It is what makes a piano sound different from a violin, or a soprano different from a tenor.7. Form: The Structure of MusicForm is the structure of a piece of music, defining how the various musical elements are organized. It can be as simpleas a verse-chorus structure or as complex as a symphony with multiple movements.8. Cultural Significance: The Context of MusicMusic is deeply rooted in the culture from which it emerges. It reflects the history, values, and experiences of a people. Understanding the cultural context of music can enrich our appreciation and understanding of its meaning.9. Evolution: The Growth of MusicMusic is not static; it evolves over time, influenced by technological advances, social changes, and the creativity of musicians. This evolution keeps music fresh and relevant, allowing it to continually resonate with new generations.10. Personal Connection: The Intimacy of MusicFinally, music has a personal dimension. It can be a source of comfort, inspiration, and joy. Each individual's connection with music is unique, shaped by their own experiences and emotions.In conclusion, the characteristics of music are as diverse as the notes on a stave. They work together to create a rich and multifaceted experience that has the power to move us, challenge us, and bring us together. 
Understanding these characteristics deepens our appreciation for the art of music and its enduring influence on human culture.
Chinese Journal of Turbomachinery, Vol. 64, No. 3, 2022
*Funded by the National Leading Talent Project (82211402) and the State Key Laboratory of Digital Twin for Industrial Equipment project (3006-02020000)
A fault-alarming and life-prediction method for wind turbine bearings based on the LSTM-WPHM model*
Ming-jun Ma (1), Hai-xin Zhao (1), Xiao-mo Jiang (2), Xiao-bin Cheng (1)
(1. Faculty of Vehicle Engineering and Mechanics, Dalian University of Technology; 2. School of Energy and Power Engineering, Dalian University of Technology, State Key Laboratory of Structural Analysis for Industrial Equipment, Liaoning Provincial Key Laboratory of Digital Twin for Industrial Equipment)
Abstract: Affected by extremely harsh operating environments and complex, variable operating conditions, wind turbine bearings have a high failure rate, which makes unplanned shutdowns likely, reduces the energy yield of the turbine, and increases the operation and maintenance cost of the wind farm. Considering the non-stationary and uncertain nature of wind turbine operation, a hybrid method is proposed that combines a multivariate LSTM time-series prediction model built on SCADA data with a WPHM reliability model to provide fault warning and life prediction for wind turbine bearings. By integrating the influence of multiple variables on bearing faults through the LSTM prediction model, the method improves the accuracy of the bearing reliability analysis and simplifies the parameter estimation of the reliability model. At the same time, the model's prediction residual is used as the condition indicator to build the WPHM reliability model, and the cumulative failure probability is used as the failure indicator, so that fault alarming and life prediction are combined in a single framework. The accuracy and effectiveness of the method are verified with fault records and monitoring data from actual turbines at an offshore wind farm.
Keywords: bearing; WPHM; LSTM; fault warning; life prediction
CLC number: TP18; TH133.3; TM315  Article ID: 1006-8155-(2022)03-0063-09  Document code: A  DOI: 10.16492/j.fjjs.2022.03.0010

A Hybrid LSTM-WPHM Method for Fault Alarming and Life Estimation of Wind Turbine Bearings
Ming-jun Ma (1), Hai-xin Zhao (1), Xiao-mo Jiang (2), Xiao-bin Cheng (1)
(1. Faculty of Vehicle Engineering and Mechanics, Dalian University of Technology; 2. State Key Lab of Structural Analysis for Industrial Equipment, Provincial Key Lab of Digital Twin for Industrial Equipment of Liaoning, School of Energy and Power Engineering, Dalian University of Technology)
Abstract: Owing to an extremely harsh ambient environment and complicated operating conditions, the bearings of wind turbines have a high failure rate, which leads to unplanned shutdowns of a wind turbine, reducing its generating capacity while increasing its operation and maintenance cost. This paper presents a hybrid intelligent probabilistic method for simultaneous fault warning and remaining life estimation for wind turbine bearings by adeptly combining multivariate LSTM …

0 Introduction
Wind turbines are usually built in remote areas such as offshore sites, grasslands, and mountains. Affected by changing weather, their operation is highly stochastic and the operating conditions vary widely; combined with an extremely harsh natural environment, this leads to frequent turbine failures and persistently high operating costs [1].
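The abstract describes the method only at a high level, so the following is a minimal illustrative sketch, not the authors' implementation, of the second half of such a pipeline: it assumes an LSTM has already produced predictions of a monitored bearing signal, uses the smoothed prediction residual as the condition covariate z(t), and evaluates an assumed Weibull proportional-hazards model h(t|z) = (β/η)(t/η)^(β−1)·exp(γz) to obtain a cumulative failure probability F(t) = 1 − exp(−∫h dt). All numerical values (β, η, γ, the alarm threshold, and the toy signals) are assumptions for demonstration.

```python
import numpy as np

# Hypothetical WPHM parameters; in practice these would be estimated from
# historical failure records and SCADA monitoring data.
BETA, ETA, GAMMA = 2.0, 8000.0, 0.8   # shape, scale [hours], covariate weight
ALARM_PROB = 0.10                     # assumed alarm threshold on failure probability

def condition_indicator(measured, predicted, window=24):
    """Smoothed absolute prediction residual used as the covariate z(t)."""
    residual = np.abs(measured - predicted)
    kernel = np.ones(window) / window
    return np.convolve(residual, kernel, mode="same")

def cumulative_failure_probability(hours, z):
    """F(t) = 1 - exp(-H(t)), with hazard h(t|z) = (BETA/ETA)*(t/ETA)**(BETA-1)*exp(GAMMA*z)."""
    hazard = (BETA / ETA) * (hours / ETA) ** (BETA - 1) * np.exp(GAMMA * z)
    dt = np.gradient(hours)
    cumulative_hazard = np.cumsum(hazard * dt)  # numerical integral of h(t)
    return 1.0 - np.exp(-cumulative_hazard)

if __name__ == "__main__":
    hours = np.arange(1.0, 5001.0)                          # operating hours
    measured = np.sin(hours / 50.0) + 0.002 * hours / 100   # toy signal with slow drift
    predicted = np.sin(hours / 50.0)                        # stand-in for LSTM output
    z = condition_indicator(measured, predicted)
    prob = cumulative_failure_probability(hours, z)
    alarms = np.flatnonzero(prob > ALARM_PROB)
    if alarms.size:
        print(f"Alarm raised at t = {hours[alarms[0]]:.0f} h, F(t) = {prob[alarms[0]]:.3f}")
```

In a real deployment the covariate would enter a likelihood-based fit of the WPHM parameters rather than being plugged into assumed values, and the residual would come from a trained multivariate LSTM rather than a synthetic signal.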
©2005 Emerson Process Management. All rights reserved.

SIS 101 - What is Risk? (15 minutes)

In this course:
1 Overview
2 What's at Risk
3 Identifying Risks
4 Inherent Risk
5 Assessing Risk
6 Tolerable Risk
7 Summary

Overview
Implementing a safety instrumented system (SIS) can be a big job. Considering the importance of plant safety, it's also a task you want to get right the first time. If you understand the basic concepts of plant safety and safety instrumented systems before you begin, you'll have a better idea of where you're headed and what kinds of questions to ask yourself and others as you proceed. This course introduces perhaps the most basic of those concepts: risk. We'll address the kinds of risks typically considered in plant safety programs, as well as how such risks are identified and evaluated.

Hint
As you go through the topics in this course, pay special attention to the following:
• The two parts of identifying hazardous risks
• What makes up inherent risks
• How risks are quantified
• Who determines tolerable risk

What's at Risk
In safety standards such as IEC 61511, what's at risk is identified as personnel and the environment. However, most companies use an expanded list of risk categories that can also include:
• Public safety and health
• Liability costs
• Production interruptions and quality issues
• Equipment damage and repair costs

What is IEC 61511?
IEC 61511 - which you'll learn about in SIS 103 - is an international safety standard issued by the International Electrotechnical Commission. Although the IEC is based in Switzerland, its standards are used in the whole world - not just Europe. The IEC standard that most of us in process automation are familiar with is IEC 61131-3, which describes control system programming languages.

Identifying Risks
A key step in maintaining or improving safety is to identify the risks that threaten it. As the diagram indicates, identifying risks requires answering both parts of the question, "What's the likelihood a harmful event will happen, and what are the consequences if it does?" Risk is determined by both the likelihood and the consequences of an event. The challenge is to identify risks in advance so that they can be reduced or eliminated – for example, by changing a product's formulation or reducing the quantities of hazardous material present. The task of identifying and ranking risk is often done in stages of increasing thoroughness. The following table lists some of the more common techniques.

Risk identification techniques | When you might use them
• Safety Review; Checklist; Preliminary Hazard Analysis; What-If; Abbreviated HAZard and OPerability (HAZOP) study | Used in preliminary hazard evaluation studies to provide a general overview of existing risks. (Usually not too time consuming.)
• What-If/Checklist; Detailed and complete HAZOP study; Failure Mode and Event Analysis | Used to develop a more detailed analysis of potential risks.
• Fault Tree Analysis; Event Tree Analysis; Cause Consequence Analysis; Human Reliability Analysis | Used in conjunction with quantitative risk analysis to establish a high level of detail about risks. (Usually used only for specific areas or unit operations.)
Inherent Risk
Most process facilities have many pieces of equipment that each contribute to what's called inherent risk – in other words, risk that exists because of the nature of the process, including the equipment and materials present. For example, the inherent risks of riding in a car include accidents caused by driver errors, flat tires, or (however unlikely) fuel fires. Evaluating the entire process helps determine the likelihood of an occurrence, and evaluating the materials (type and quantity) helps determine the consequences of the risk.
Let's see how inherent risk applies to a process industry example – a pressurized vessel containing ammonia. Looking at the entire process reveals several inherent risks that could lead to an ammonia release, including possibilities for:
• Tank rupture from overpressure and/or seam failure
• Leaking at pipe joints, valve packing, and/or sensor taps
• Failure of the transmitters, control valve, and/or basic process control system (such as a DCS or PLC) to maintain the correct volume and pressure in the tank
Each of these risks has a likelihood. The consequences depend largely on the dangers of human exposure to ammonia – including eye and respiratory system irritation.

Assessing Risk
Assessing risk, though potentially subjective, is usually done using an established corporate risk assessment model developed by competent people – such as engineers, chemists, and lawyers – who are trained to assess and quantify cause, effect, and liability. Developing a risk assessment model for each at-risk category requires establishing a consistent means of describing both an event's likelihood (frequency) and its consequence (severity). When developing risk assessment models, it's a good idea to use broad quantifications – such as orders of magnitude rather than exact values – to avoid getting bogged down "splitting hairs" about overly precise levels of risk.
Likelihood. The consequences of an event can be severe, but the likelihood of its happening may be low. To ensure that different groups of people within the same company establish approximately the same risk rankings for similar events, the model should include a consistent, quantified measure of event likelihood or frequency. For example, a low likelihood might be defined as less than 1 chance in 10,000 of the event occurring during a year.
Sample likelihood risk assessment model:
• Low (e.g., less than 1/10,000 annually): events such as multiple failures of diverse instruments or valves, multiple human errors in a stress-free environment, or spontaneous failures of process vessels.
• Medium (e.g., 1/10,000 – 1/1,000 annually): events such as dual instrument or valve failures, or major releases in loading/unloading areas.
• High (e.g., more than 1/1,000 annually): events such as process leaks, single instrument or valve failures, or human errors that result in small releases of hazardous materials.
(Adapted from IEC 61511-3, Table C.1 - Frequency of hazardous event likelihood.)
Consequence. The model should also include a way of evaluating and defining the consequences for each at-risk category. For example, the table below shows one way consequences might be defined in terms of number of injuries or amount of property damage.
Sample consequence risk assessment model:
• Minor (e.g., injury or more than $120,000 of damage or lost production): minor damage to equipment; no shutdown of the process; temporary injury to personnel and damage to the environment.
• Serious (e.g., hospitalization or more than $250,000 of damage or lost production): damage to equipment; short shutdown of the process; serious injury to personnel and the environment.
• Extensive (e.g., death or more than $1,000,000 of damage or lost production): large-scale damage to equipment; shutdown of a process for a long time; catastrophic consequences for personnel and/or the environment.
(Adapted from IEC 61511-3, Table C.2 - Criteria for rating the severity of impact of hazardous events.)
For the ammonia tank example, the likelihood of an ammonia release is determined by combining the likelihood of risks like those listed in "Inherent Risk." In this case, we've determined that the total risk of leaks is medium (between 1 chance in 1,000 and 1 chance in 10,000). The consequences of such an event are determined primarily by the amount of ammonia released and the resulting potential for affecting plant personnel and the public. In our example, we've determined that a tank rupture and resulting ammonia release would be considered serious. This relatively simple model is only one example of how risk might be assessed. We'll look at another, more quantitative model in SIS 102.

Tolerable Risk
We all know there is a point where risk becomes "intolerably high." Likewise, we know there's a point where risk becomes broadly accepted as negligible. Between those two points is the tolerable risk area. Each of us makes decisions about what constitutes tolerable risk in our own lives – for example, deciding to stop or go through a traffic light that just turned yellow. (Traffic fatality risk is 2 in 1,000 person-years.) In a process plant, workers are often exposed to multiple and simultaneous risks. The purpose of a plant safety program – including safety instrumented systems – is to ensure this exposure is tolerable at all times.
So what are the appropriate numbers for tolerable risk in a plant environment? There is no one "correct" answer; the plant owner/operator must decide the tolerable risk criteria for the plant. IEC 61511 describes tolerable risk as risk which is accepted in a given context based on the current values of society. Most companies include injuries, deaths, and dollars among the factors they consider. "Best estimates" of what constitutes tolerable risk may be based on research results from similar circumstances and events at other sites and industries. Other times, tolerable risks are available from sources such as the U.S. Occupational Safety & Health Administration (OSHA), the American Conference of Governmental Industrial Hygienists (ACGIH), the U.S. Environmental Protection Agency (EPA), or similar agencies in other countries. Some references list the highest points of tolerable risk as 1 fatality per 1,000 years of exposure for workers, and 1 fatality per 10,000 years of exposure for the public. Those same references rate 1 fatality per 100,000 years of exposure as negligible risk. However, individual world areas, countries, and companies frequently apply lower acceptable risk numbers.
Tolerable risk is determined by consequences as well as likelihood. For the ammonia-tank example, we'll use locally acceptable levels of human exposure to ammonia. For site workers in the U.S., OSHA says the maximum exposure is an atmospheric concentration of 50 parts per million (ppm) over an 8-hour period. Other limits may also have to be considered. For example, what if there's a school nearby?
For public exposure, the ACGIH allows only 25 ppm of ammonia over the same time period. Numbers like these help determine the necessary risk reduction an SIS must achieve – which we will address in the next course.

Summary
In this course you've learned that:
• Risks consist of likelihood and consequences.
• Inherent risks are those present in the complete process, including equipment and materials.
• Quantifying risk requires using an established risk assessment model.
• Tolerable risks are the numbers of injuries, deaths, or dollar loss (and their frequency) that we are willing to accept.
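As a supplement to the course text, the sketch below shows one way the likelihood and consequence categories described above could be combined into a simple risk-ranking lookup. It is purely illustrative and not part of the course material; the category names follow the sample models above, but the ranking assignments in the matrix are assumptions.

```python
# Illustrative risk-matrix lookup combining the likelihood and consequence
# categories from the sample models above. The ranking assignments are
# assumptions for demonstration, not values from IEC 61511 or the course.
RISK_MATRIX = {
    ("low", "minor"): "negligible",
    ("low", "serious"): "tolerable",
    ("low", "extensive"): "tolerable",
    ("medium", "minor"): "tolerable",
    ("medium", "serious"): "undesirable",
    ("medium", "extensive"): "intolerable",
    ("high", "minor"): "undesirable",
    ("high", "serious"): "intolerable",
    ("high", "extensive"): "intolerable",
}

def rank_risk(likelihood: str, consequence: str) -> str:
    """Return an assumed qualitative risk ranking for a hazardous event."""
    key = (likelihood.lower(), consequence.lower())
    if key not in RISK_MATRIX:
        raise ValueError(f"unknown category pair: {key}")
    return RISK_MATRIX[key]

# The ammonia-tank example in the text: medium likelihood, serious consequence.
print(rank_risk("medium", "serious"))   # -> "undesirable" (assumed ranking)
```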
Evaluating the Impact of Simultaneous Multithreading on Network Servers Using Real Hardware
Yaoping Ruan (yruan@), Vivek S. Pai (vivek@), Erich Nahum† (nahum@), John M. Tracey† (traceyj@)
Department of Computer Science, Princeton University, Princeton, NJ 08544
†IBM T.J. Watson Research Center, Yorktown Heights, NY 10598

ABSTRACT
This paper examines the performance of simultaneous multithreading (SMT) for network servers using actual hardware, multiple network server applications, and several workloads. Using three versions of the Intel Xeon processor with Hyper-Threading, we perform macroscopic analysis as well as microarchitectural measurements to understand the origins of the performance bottlenecks for SMT processors in these environments. The results of our evaluation suggest that the current SMT support in the Xeon is application and workload sensitive, and may not yield significant benefits for network servers. In general, we find that enabling SMT on real hardware usually produces only slight performance gains, and can sometimes lead to performance loss. In the uniprocessor case, previous studies appear to have neglected the OS overhead in switching from a uniprocessor kernel to an SMT-enabled kernel. The performance loss associated with such support is comparable to the gains provided by SMT. In the 2-way multiprocessor case, the higher number of memory references from SMT often causes the memory system to become the bottleneck, offsetting any processor utilization gains. This effect is compounded by the growing gap between processor speeds and memory latency. In trying to understand the large gains shown by simulation studies, we find that while the general trends for microarchitectural behavior agree with real hardware, differences in sizing assumptions and performance models yield much more optimistic benefits for SMT than we observe.

Categories and Subject Descriptors: C.4 PERFORMANCE OF SYSTEMS: Design studies
General Terms: Measurement, Performance.
Keywords: Network Server, Simultaneous Multithreading (SMT).

1. INTRODUCTION
Simultaneous multithreading (SMT) has recently moved from simulation-based research to reality with the advent of commercially available SMT-capable microprocessors. Simultaneous multithreading allows processors to handle multiple instruction streams in the pipeline at the same time, allowing higher functional unit utilization than is possible from a single stream. Since the hardware support for this extra parallelism seems to be minimal, SMT has the potential to increase system throughput without significantly affecting system cost. While academic research on SMT processors has been taking place since the mid-1990's [8, 37], the recent availability of SMT-capable Intel Xeon processors allows performance analysts to perform direct measurements of SMT benefits under a wide range of workloads.
One of the biggest opportunities for SMT is in network servers, such as Web, FTP, or file servers, where tasks are naturally parallel, and where high throughput is important. While much of the academic focus on SMT has been on scientific or
computation-intensive workloads,suitable for the High Performance Computing (HPC)community,a few simulation studies have explicitly exam-ined Web server performance[18,26].The difficulty of simulat-ing server workloads versus HPC workloads is in accurately han-dling operating system(OS)behavior,including device drivers and hardware-generated interrupts.While processor-evaluation work-loads like SPEC CPU[33]explicitly attempt to avoid much OS in-teraction,server workloads,like SPECweb[34]often include much OS,filesystem,and network activity..While simulations clearly provide moreflexibility than actual hardware,evaluation on real hardware also has its advantages,in-cluding more realism and faster ing actual hard-ware,researchers can run a wider range of workloads(e.g.,bottom-half heavy workloads)than is feasible in simulation-based environ-ments.Particularly for workloads with large data set sizes that are slow to reach steady state,the time difference between simulation and evaluation can be substantial.The drawback of hardware,how-ever,is the lack of configuration options that is available in simula-tion.Someflexibility in the hardware analysis can be gained by us-ing processors with different characteristics,though this approach is clearly much more constrained than simulators.This paper makes four contributions:•We provide a thorough experimental evaluation of SMT for network servers,usingfive different software packages and three hardware platforms.We believe this study is more com-plete than any related work previously published.•We show that SMT has a smaller performance benefit than expected for network servers,both in the uniprocessor and dual-processor cases.In each case,we identify the macro-level issues that affect performance.•We perform a microarchitectural evaluation of performance using the Xeon’s hardware performance counters.The re-sults provide insight into the instruction-level issues that af-fect performance on these platforms.•We compare our measurements with earlier simulation re-sults to understand what aspects of the simulated processorsyielded much larger performance gains.We discuss the fea-sibility of these simulation models,both in the context of current hardware,and with respect to expected future trends. Our evaluation suggests that the current SMT support is sensitive to application and workloads,and may not yield significant bene-fits for network servers,especially for OS-heavy workloads.We find that enabling SMT usually produces only slight performance gains,and can sometimes lead to performance loss.In the unipro-cessor case,simulations appear to have neglected the OS overhead in switching from a uniprocessor kernel to an SMT-enabled kernel. 
The performance loss associated with such support is comparable to the gains provided by SMT.In the2-way multiprocessor case, the higher number of memory references from SMT often causes the memory system to become the bottleneck,offsetting any pro-cessor utilization gains.This effect is compounded by the growing gap between processor speeds and memory latency.Wefind that SMT on the Xeon tends to provide better gains when coupled with large L3caches.By comparing performance gains across variants of the Xeon,we argue that such caches will only become more cru-cial for SMT as clock rates increase.If these caches continue to be one of the differentiating factors between commodity and higher-cost processors,then commodity SMT will see eroding gains going forward.We believe this observation also applies to architectures other than the Xeon,since SMT only yields benefits when it is able to utilize more processor resources.Using these results,we can also examine how simulation sug-gested a much more optimistic scenario for SMT,and why it differs from what we observe.For example,when calculating speedups, none of the simulations used a uniprocessor kernel when measur-ing the non-SMT base case.Furthermore,the simulations use cache sizes that are larger than anything commonly available today.These large caches appear to have supported the higher number of threads used,yielding much higher benefits than what we have seen,even when comparing with the same number of threads.We do not be-lieve that the processor models used in the simulation are simply more aggressive than what is available today or likely to be avail-able in the near-future.Instead,using comparable measurements from the simulations and existing hardware,we show that the type of processors commonly modeled in the simulations are unlikely to ever appear as slightly-modified mainstream processors.We argue that they have characteristics that suggest they could be built specif-ically for SMT,and would sacrifice single-thread performance. The rest of this paper is organized as follows:we provide some background on SMT,the Xeon,and our experimental setup in Sec-tion2.We measure SMT’s effect on throughput and perform a microarchitectural analysis in Sections3and4.In Section5we compare our measurement results to previous simulation studies. The impact of other workloads is discussed in Section6.Section7 discusses related work,and we conclude in Section8.2.BACKGROUNDIn this section we present an overview of the Intel Xeon proces-sor with Hyper-Threading(Intel’s term for SMT),then describe our experimental platform including hardware parameters and server configuration,our workloads and measurement methodology. 
2.1 SMT Architecture
The SMT architecture was proposed in the mid-1990's, and has been an active area for academic research since that time [16, 36, 37], but the first general-purpose processor with SMT features was not shipped until 2003. The main intent of SMT is to convert thread-level parallelism into instruction-level parallelism. In SMT- …
[Table 1 (fragment): Intel Xeon Hyper-Threading pipeline and resource details – fetch: round robin between the logical processors, 3 µops per cycle; shared resources: caches, branch predictors, decoder logic; duplicated resources: ITLB, renaming logic; partitioned resources: µop queue, re-ordering buffer.]
[Table 2: Intel Xeon memory hierarchy information (level, associativity, latency in cycles) – trace cache: 12K µops, 6 µops per line; D-L1: 4-way, 2-cycle latency; L2: 512KB, 128-byte lines; main memory: N/A, 225–344 cycles; DTLB: 64 entries, 20-cycle miss penalty. The latency cycles of each level of the memory hierarchy include the cache miss time of the previous level.]
… absolute time. The absolute latency is relatively constant since the FSB speed is the same. The impact on bandwidth is 22%, much less than the clock speed difference – the 2.0 GHz system has a read bandwidth of 1.8 GB/sec while the 3.06 GHz system has a value of 2.2 GB/sec. While higher bandwidth is useful for copy-intensive applications, the memory latency is more important to applications that perform heavy pointer-chasing. Early Web servers performed significant numbers of memory copies to transfer data, but with the introduction of zero-copy [22] support into servers, copy bandwidth is less of an issue.
Our testing harness consists of 12 uniprocessor client machines with AMD Duron processors at 1.6 GHz. The aggregate processor power of the clients is enough to ensure that the clients are never the bottleneck. To ensure adequate network bandwidth, the clients are partitioned into four groups of three machines. Each group is connected to the server via a separate switched Gigabit Ethernet, using four Intel e1000 MT server adapters at the server.
We compare five different OS/processor configurations, based on whether a uniprocessor or multiprocessor kernel is used, and whether SMT is enabled or disabled. Using the BIOS support and OS boot parameters, we can select between one or two processors, and enable or disable SMT. For most of our tests, we use a multiprocessor-enabled (SMP) kernel, since the OS sees an SMT-enabled processor as two logical processors. However, when we run with one physical processor and SMT disabled, we also test on a uniprocessor kernel. These combinations yield the five configurations studied in this paper: one processor with uniprocessor kernel (1T-UP), one processor with SMP kernel (1T-SMP), one processor with SMP kernel and SMT enabled (2T), two processors (2P), and two processors with SMT enabled (4T). Key features of the five configurations and the names used in this paper are shown in Table 3. The operating system on the server is Linux, with kernel version 2.6.8.1. This version includes optimizations for SMT, which we enable. The optimizations are described next.

2.3 Kernel Versions and Overheads
In evaluating SMT performance on uniprocessors, it is important to understand the distinction between the types of kernels available, because they affect the delivered performance. Uniprocessor kernels, as the name implies, are configured to only support one processor, regardless of how many physical processors are in the system. Multiprocessor kernels are configured to take advantage of all processors in the system using a single binary image. While intended for multiple processors, they are designed to operate without problems on a single processor.
Uniprocessor kernels can make assumptions about what is possible during execution, since all sources of
activity are taking place on one processor. Specifically, the OS can make two important assumptions …
[Table 3 (fragment): key features of the configurations – 1T-SMP: 1 CPU, SMP kernel, SMT disabled; 2P: 2 CPUs, SMP kernel, SMT disabled.]
… TUX [38], and Haboob [40]. Each server has one or more distinguishing features which increases the range of systems we study. All of the servers are written in C, except Haboob, which uses Java. TUX is in-kernel, while all of the others are user-space. Flash and Haboob are event-driven, but Haboob also uses threads to isolate different steps of request processing. We run Apache in two configurations – with multiple processes (dubbed Apache-MP), and multiple threads (dubbed Apache-MT) using Linux kernel threads, because the Linux 2.6 kernel has better support for threads than the 2.4 series, and the Xeon has different cache sharing for threaded applications. Threaded applications share the same address space register while multi-process applications usually have different registers. Flash has a main process handling most of the work with helpers for disk IO access. We run the same number of Flash main processes as the number of hardware contexts. TUX uses a thread-pool model, where multiple threads handle ready events. With the exception of Haboob, all of the servers use the zero-copy interfaces available on Linux, reducing memory copy overhead when sending large files. For all of the servers, we take steps described in the literature to optimize their performance. While performance comparison among the servers is not the focus of this paper, we are interested in examining the performance characteristics of SMT on these different software styles.
We use the SPECweb96 [34] benchmark mostly because it was used in previous simulation studies. Compared to its successor, the SPECweb99 benchmark, it spends more time in the kernel because all requests are static, which resembles other server workloads such as FTP and file servers. We also include SPECweb99 benchmark results for comparison. SPECweb is intended to measure a self-scaling capacity metric, which means that the workload characteristics change in several dimensions for different load levels.
To simplify this benchmark while retaining many of its desirable properties, we use a more tractable subset when measuring bandwidths. In particular, we fix the data set size of the workload to 500MB, which fits in the physical memory of our machine. We perform measurements only after an initial warm-up phase, to ensure that all necessary files have been loaded into memory. During the bandwidth tests, no disk activity is expected to occur. We disable logging, which causes significant performance losses in some servers. SPECweb99 measures the number of simultaneous connections each server is able to sustain while providing the specified quality of service to each connection. The SPECweb99 client software introduces latency between requests to decrease the per-connection bandwidth. SPECweb96 does not have this latency, allowing all clients to issue requests in a closed-loop, infinite-demand model. We use 1024 simultaneous connections, and report the aggregate response bandwidth received by the clients.
We use a modified version of OProfile [20] to measure the utilization of microarchitectural resources via the Xeon's performance-monitoring events. OProfile ships with the Linux kernel and is able to report user, kernel, or aggregated event values. OProfile operates similarly to DCPI [1], using interrupt-based statistical sampling of event counters to determine processor activity without much overhead. We find that for our experiments, the measurement overhead is generally less than 1%. While
OProfile supports many event counts available on the Xeon,we enhance the released code to sup-port several new events,such as L1data cache miss,DTLB miss, memory loads,memory stores,resource stalls,etc.3.SMT PERFORMANCEIn this section we evaluate the throughput improvement of SMT in both uniprocessor and multiprocessor systems.Particular atten-tion is given to the comparison between configurations with and without SMT enabled,and kernels with and without multiproces-sor support.Wefirst analyze trends at a macroscopic level,and then use microarchitectural information to understand what is causing the macroscopic behavior.Our bandwidth result for the basic3.06 GHz Xeon,showingfive servers andfive OS/processor configura-tions,can be seen in Figure2.Results for2.0GHz and3.06GHz with L3cache are seen in Figures1and3,respectively.For each server,thefive bars indicate the maximum throughput achieved us-ing the specified number of processors and OS configuration. While bandwidth is influenced by both the server software as well as the OS/processor configuration,the server software usually has a large effect(and in this case,dominant effect)on bandwidth. Heavily-optimized servers like Flash and TUX are expected to out-perform Apache,which is designed forflexibility and portability instead of raw performance.The relative performance of Apache, Flash,and Haboob is in-line with previous studies[28].TUX’s relative performance is somewhat surprising,since we assumed an in-kernel server would beat all other options.To ensure it was be-ing run correctly,we consulted with its author to ensure that it was properly configured for maximum performance.We surmise that its performance is due to its emphasis on dynamic content,which is not exercised in this portion of our testing.Haboob’s low per-formance can be attributed both to its use of Java as well as its lack of support for Linux’s sendfile system call(and as a result, TCP checksum offload).For in-memory workloads,the CPU is at full utilization,so the extra copying,checksumming,and language-related overheads consume processor cycles that could otherwise be spent processing other requests.3.1SMP Overhead on UniprocessorWe can quantify the overhead of supporting an SMP-capable ker-nel by comparing the1T-UP(one processor,uniprocessor kernel) value with the1T-SMP(one processor,SMP kernel)value.The loss from uniprocessor kernel to SMP kernel on the base3.06GHz processor is10%for Apache,and13%for Flash and Tux.The losses on the L3-equipped processor and the2.0GHz processor are14%for Apache and18%for Flash and Tux,which are a little higher than our base system.The impact on Haboob is relatively low(4%-10%),because it performs the most non-kernel work.The magnitude of the overhead is fairly large,even though Linux has a reputation of being efficient for low-degree SMP configurations. 
This result suggests that,for uniprocessors,the performance gained from selecting the uniprocessor kernel instead of SMP kernel can be significant for these applications.The fact that the impacts are larger for both the slowest processor and the processor with L3are also interesting.However,if we con-sider these results in context,it can be explained.The extra over-heads of SMP are not only the extra instructions,but also the extra uncacheable data reads and writes for the locks.The fastest system gets its performance boost from its L3cache,which makes the main memory seem closer to the processor.However,the L3provides no benefit for synchronization traffic,so the performance loss is more pronounced.For the slowest processor,the extra instructions are an issue when the processor is running at only two-thirds the speed of the others.3.2Uniprocessor SMT BenefitsUnderstanding the benefits of SMT for uniprocessors is a little more complicated,because it must be compared against a base case.If we compare1T-SMP to2T(uniprocessor SMT),the re-sulting graphs would appear to make a great case for SMT,with speedups in the25%-35%range for Apache,Flash and TUX,as shown in Figure4.However,if we compare the2T performanceFigure 1:Throughput of Xeon 2.0GHz pro-cessor without L3cacheFigure 2:Throughput of base Xeon 3.06GHzprocessorFigure 3:Throughput of Xeon 3.06GHz pro-cessor with 1MB L3cacheFigure 4:SMT speedup on uniprocessor sys-tem with SMP kernel(1)(NonHaltedCycles∗BusSpeed)The bus utilization values,broken down by server software,con-figuration,and processor type,are shown in Figure7.Severalfirst-order trends are visible:bus utilization tends to increase as the num-ber of contexts/processors is increased,is comparable for all servers except Haboob,and is only slightly lower for L3-equipped pro-cessors.The trends can be explained using the observations from the bandwidth study,and provide strong evidence for our analysis about what causes bottlenecks.The increased bus utilization for a given processor type as the number of processors and hardware contexts increase is not sur-prising,and is similar in pattern to the throughput behavior.Essen-tially,if the system is work-conserving,we expect bus utilization to be correlated with the throughput level.In fact,we see this pattern for the gain from the2.0GHz processor to3.06GHz–the coeffi-cient of correlation between the throughput and the bus utilization is0.95.The coefficient for the L3-equipped versus base3.06GHz Xeon is only0.62,which is still high,and provides evidence that the L3cache is definitely affecting the memory traffic.A more complete explanation of the L3results are provided below.The fact that Haboob’s bus utilization looks different from others is explained by its lack of zero-copy support,and in turn explains its relatively odd behavior in Figures5and6.The bulk data copy-ing that occurs duringfile transfers will increase the bus utilization for Haboob,since the processor is involved in copying buffers and performing TCP checksums.However,the absolute utilization val-ues mask a much larger difference–while Haboob’s bus utilization is roughly50%higher than that of Flash or TUX,its throughput is one-half to one-third the value achieved by those bin-ing thosefigures,we see that Haboob has a per-request bus utiliza-tion that is three to four times higher than the other servers.The same explanation applies to the bus utilization for the L3-equipped processors,and to Apache’s relative gain from SMT.The L3cache absorbs memory traffic,reducing bus 
utilization,but for Flash and TUX,the L3numbers are only slightly below the non-L3 numbers.However,the absolute throughput for the L3-equipped processors are as much as50%higher,indicating that the per-request bus utilization has actually dropped.The differences in bus utilization then provide some insight into what is happening.For Flash and TUX,the L3bus utilizations are very similar to the non-L3values,suggesting that the request throughput increases until the memory system again becomes the bottleneck.For Apache,the L3utilization is lower than the non-L3,suggesting that while the memory system is a bottleneck without the L3cache,somethingFigure10:L1data cache miss rate Figure8:Cycles per micro-op(CPµ)Figure9:L1instruction cache(Trace Cache)miss rateFigure12:Instruction TLB miss rate Figure13:Data TLB miss rate Figure11:L2cache miss rate,including bothinstruction and dataApache-MT TuxµPB13.0 6.0IPB7.1 3.4Figure 14:Branch mispredictionrate Figure 15:Trace delivery enginestalls Figure 16:Stalls due to lack of storebuffersFigure 17:#of pipeline clears perbyteFigure 18:#of aliasing conflicts per bytesharing.In comparing Apache-MT to Apache-MP,we do see some reduction in the 4T L1miss rate,but the miss rate is still higher than the 2P cases.Thus,while the multithreaded code helps reduce the pressure,the SMT ICache pressure is still significant.The L2miss rate drops in all cases when SMT is enabled,indicating that the two contexts are reinforcing each other.The relatively high L2miss rate for TUX is due to its lower L1ICache miss rate –in absolute terms,TUX has a lower number of L2accesses.The interactions on CPI are complex –the improved L2miss rates can reduce the impact of main memory,but the much worse L1miss rates can inflate the impact of L2access times.We show the breakdowns later when calculating overall CPI values.•TLB misses.In the current Xeon processor,the Instruction Translation Lookaside Buffer (ITLB)is duplicated and the shared DTLB is tagged with each logical processor’s ID.Enabling SMT drops the ITLB miss rate (shown in Figure 12)while increasing the DTLB miss rate (shown in Figure 13).The DTLB miss rate is expected,since the threads may be operating in different regions of the code.We believe the drop in ITLB stems from the interrupt handling code executing only on the first logical processor,effec-tively halving its ITLB footprint.•Mispredicted branches.Branches comprise 15%-17%of in-structions in our applications.Each mispredicted branch has a 20cycle penalty.Even though all of the five servers show 50%higher misprediction rates with SMT,the overall cost is not significant compared to cache misses,as we show in the breakdowns later.•Instruction delivery stalls.The cache misses and mispredicted branches result in instruction delivery stalls.This event measures the number of cycles halted when there are no instructions ready to issue.Figure 15shows the average cycles stalled for each byte delivered.For each server,we observe a steady increase from 1T-UP to 4T,suggesting that with more hardware contexts,the number of cycles spent stalled increases.•Resource Stalls.While the value of instruction delivery stalls measures performance in the front-end of the pipeline,stalls may also occur during pipeline execution stages.This event measures the occurrence of stalls in the allocator caused by store buffer re-strictions.In the Xeon,buffers between major pipeline stages are partitioned when SMT is enabled.Figure 16shows cycles stalled per byte due to lack of the store buffer in the 
allocator.Enabling SMT exhibits a doubling of the number of stall cycles for each byte transfered.Unfortunately,stalls due to other buffer conflicts,such as the renaming buffer,are not available on existing performance-monitoring counters.We expect similar pressure is also seen in other buffers.•Pipeline clears.Due to the Xeon’s design,there are conditions in which all non-retiring stages of the pipeline need to be cancelled.This event measures the number of these flushes.When this hap-pens,all of the execution resources are idle while the clear occurs.Figure 17shows the average number of pipeline clears per byte of content.The SMT rate is a factor of 4higher,suggesting that pipeline clears caused by one thread can affect other threads exe-cuting simultaneously.Profiling on this event indicates that more than 70%are caused by interrupts.Haboob’s high clear rate in 4T mode may be responsible for some of its performance degradation.•64K aliasing conflicts.This event occurs when the address of a load or store conflicts with another reference which is in progress.When this happens,the second reference cannot begin until the first one is evicted from the cache.This type of conflict exists in the first-level cache and may incur significant penalties for loads that alias to preceding stores.The number of conflicts per byte is shown in Figure 18.All of the servers show fairly high number of con-flicts,suggesting an effective direction for further optimization.•Putting cycles together.We estimate the aggregated cycles per instruction of these negative events and compare them to the mea-sured CPI.While it is possible to estimate the penalty of each event,。