On the Measurement of Affective Variables in Second Language Learning
The effect of internal control deficiencies on the usefulness of earnings in executive compensation

Kareen E. Brown a,1, Jee-Hae Lim b,⁎
a School of Accounting and Finance, University of Waterloo, 200 University Ave W (HH 289D), Waterloo, ON, Canada N2L 3G1
b School of Accounting and Finance, University of Waterloo, 200 University Ave W (HH 289G), Waterloo, ON, Canada N2L 3G1

Advances in Accounting, incorporating Advances in International Accounting 28 (2012) 75–87. doi:10.1016/j.adiac.2012.02.006. © 2012 Elsevier Ltd. All rights reserved. Journal homepage: www.elsevier.com/locate/adiac

Keywords: Sarbanes–Oxley Act; Internal control material weaknesses (ICMW); Executive compensation; Earnings

Abstract: Since SOX 404 disclosures are informative about earnings, and due to the widespread practice of using earnings-based measures in executive compensation, this study examines whether reports of internal control material weaknesses (ICMW) under SOX 404 influence firms' reliance on earnings in tying executive pay to performance. Using 391 (366) firm-year observations with reported ICMW and 3648 (3138) firm-year observations for CEOs (CFOs) reporting NOMW under SOX 404, we find a decreased strength in the association between earnings and executives' (CEO and CFO) compensation when the firm reports an ICMW, and as the number of reported ICMW increases. In addition, we find this decreased weight on earnings for the more severe Company-Level than Account-Specific material weaknesses. Our study suggests that the ICMW report under SOX 404 provides incremental information for executive compensation beyond that contained in reported earnings.

☆ Data availability: Data used in this paper are publicly available and also can be requested from the authors.
⁎ Corresponding author. Tel.: +1 519 888 4567 x35702; fax: +1 519 888 7562. E-mail addresses: kebrown@uwaterloo.ca (K.E. Brown), jh2lim@uwaterloo.ca (J.-H. Lim).
1 Tel.: +1 519 888 4567 x35776; fax: +1 519 888 7562.

1. Introduction

The accounting scandals at firms such as Enron and WorldCom highlighted deficiencies in corporate governance that were characterised by low financial reporting quality and disproportionate pay-for-performance.[2] To discipline firms and restore investor confidence, legislative authorities enacted the Sarbanes–Oxley Act. Among the reforms is Section 404 of SOX (SOX 404), which requires both the management and the external auditor to report on the adequacy of a firm's internal control over financial reporting. Prior research shows that, relative to non-disclosing firms, firms reporting material weaknesses in internal control (ICMW) have inferior accruals and earnings quality (Ashbaugh-Skaife, Collins, Kinney, & LaFond, 2008; Bedard, 2006; Doyle, Ge, & McVay, 2007b), and lower earnings–returns coefficients (Chan, Farrell, & Lee, 2008). Given that SOX 404 disclosures are informative about earnings, and due to the widespread practice of using earnings-based measures in executive compensation, this study examines whether reports of ICMW under SOX 404 influence firms' reliance on earnings in tying executive pay to performance.

A long line of research shows that earnings-based performance measures are commonly used to motivate and reward executives because such measures correspond to manager actions (Gjesdal, 1981). However, there are two drawbacks to using earnings to evaluate executive performance. First, because executives know how their actions impact earnings, they can manipulate this measure to increase their wealth. Second, earnings do not fully reflect the long-term implications of recent executive decisions. Based on these factors, firms place varying weights on earnings in compensating their executives, and the weights are determined by how sensitive earnings are to effort and by the precision, or lack of noise, with which earnings reflect executives' actions (Banker & Datar, 1989; Lambert & Larcker, 1987). However, there is evidence that CEOs are shielded from certain negative events, such as firm restructuring (Dechow, Huson, & Sloan, 1994) or above-the-line losses (Gaver & Gaver, 1998).

[2] For example, Hall and Liebman (1998) report a 209% increase in CEO mean salary in large US firms from 1980 to 1994, and Bebchuk and Grinstein (2005) document a 146% increase in CEO pay from 1993 to 2003 in S&P 500 firms. Bebchuk and Fried (2004) advance the theory that soaring executive pay is the result of management power.
Our main focus of inquiry is significant because it builds on this line of research by showing that the sensitivity of compensation–performance relations varies cross-sectionally with the quality of the system producing the earnings information. Weak internal controls potentially permit accounting errors to occur and go undetected, increasing unintentional errors in accrual estimation and/or facilitating intentional earnings management (Doyle et al., 2007b). A report of an internal control deficiency, therefore, signals that the manager is unable to provide reasonable assurance regarding the quality of reported earnings (Ashbaugh-Skaife et al., 2008; Bedard, 2006; Chan et al., 2008; Doyle et al., 2007b). Using the sensitivity-precision framework, the errors introduced by weak internal controls are likely to result in earnings that capture executives' effort with low precision, diminishing its use as an assessment tool for evaluating managers' performance. Motivated by the optimal contracting hypothesis, we posit that firms with ICMW report earnings with lower precision-sensitivity.

The purpose of this study is to examine whether firms reporting ICMW place relatively less weight on earnings compared to firms reporting no ICMW (NOMW). In other words, whether ICMW reports influence compensation contracts is an empirical issue because, although ICMW firms report lower earnings–returns coefficients (Chan et al., 2008), prior research does not suggest any direct association between the valuation role of earnings and its usefulness in compensating executives (Bushman, Engel, & Smith, 2006). Under the null hypothesis, the sensitivity of executive compensation to earnings is unaffected by internal control deficiency.

Consistent with prior research, we also examine the weight placed on earnings for firms reporting two different types of ICMW: Account-Specific and Company-Level weaknesses (Doyle, Ge, & McVay, 2007a, 2007b). Account-Specific (AS) material weaknesses arise from routine firm operations and may be resolved by additional substantive auditing procedures. When AS material weaknesses are identified, executives or auditors can easily audit around them by performing additional substantive procedures. Company-Level (CL) material weaknesses, on the other hand, are less easily resolved by auditor involvement and result from a lack of resources or inexperience in maintaining an effective control system. Due to the pervasiveness of CL material weaknesses, the scope of audit efforts frequently needs to be expanded to deal with these more serious concerns regarding the reliability of financial statements (Moody's, 2006; PCAOB, 2004).
The extent to which auditors are able to mitigate the negative effect on earnings of these two types of weaknesses would suggest less noise/greater precision in earnings from Account-Specific relative to Company-Level weaknesses. The impact of precision times sensitivity on the weight of earnings for Account-Specific versus Company-Level material weaknesses is therefore the second empirical question.

Using 391 (366) firm-year observations with reported ICMW and 3648 (3138) firm-year observations for CEOs (CFOs) reporting NOMW under Section 404 of SOX, we find a decreased strength in the association between earnings and CEO compensation when the firm reports an ICMW, and as the number of reported ICMW increases. Our results are also robust to controls for various firm characteristics that prior studies have found to influence the role of earnings in compensation contracts, including earnings quality proxies such as earnings persistence (Baber, Kang, & Kumar, 1998) and corporate governance characteristics (Chhaochharia & Grinstein, 2009). In addition, for CL material weaknesses, we find evidence of a lower strength in the earnings–compensation relation for CEOs. We find no such result with AS material weaknesses, suggesting that only CL weaknesses affect the weight placed on earnings in compensating CEOs.

This study makes two contributions. First, it contributes to the existing literature by examining the role of earnings as a performance measure in executive compensation contracts (Bushman et al., 2006; Sloan, 1993; among others), and by examining how information on the quality of a firm's internal controls influences the earnings–compensation relation. We confirm that weak internal controls result in a diminished role for accounting measures in the CEO compensation relation, consistent with optimal contracting. Specifically, it is the firms with CL weaknesses that reduce the weight on earnings in CEO cash compensation. Overall, our findings suggest that the information in the ICMW report is incremental to, or more timely than, that provided by discretionary accruals or earnings persistence measures.

Second, our study extends a growing body of literature on the relation between executive compensation and ICMW in the post-SOX era.
Carter, Lynch, and Zechman (2009) show that the implementation of SOX in 2002 led to a decrease in earnings management, and that firms responded by placing more weight on earnings in bonus contracts for CEOs and CFOs in the post-SOX period. Another study, by Hoitash, Hoitash, and Johnstone (2009), suggests that the compensation of the CFO, who has primary responsibility for the quality of the firm's internal controls, is penalized for reports of ICMW. Since prior evidence shows, and stresses the importance of, a performance-based compensation penalty for internal control quality as a non-financial performance measure in the evaluation of executives, our study further investigates whether an ICMW impacts the weight of earnings in compensation contracts under the mandate of SOX 404. We show that firms consider the strength of the earnings generation system and specifically choose to reduce emphasis on earnings-based performance measures in determining CEOs' cash compensation.

The next section of this paper provides background information on the internal control disclosure practices required by the Sarbanes–Oxley Act, discusses the usefulness of earnings as a performance measure and further develops our hypotheses. The third section describes our sample and research design. The fourth section presents our descriptive statistics, results and sensitivity analyses. The fifth section concludes the paper.

2. Prior research and hypothesis

Section 404 of SOX is one of the most visible and tangible changes to firms' internal control systems in recent times (Public Company Accounting Oversight Board (PCAOB), 2004).[3] The pivotal requirement of Section 404 is that management assess the effectiveness of the firm's internal controls over financial reporting and include this information in the firm's annual financial statements. This regulation increases scrutiny by the firm's auditors because the manager assessments must then be separately attested to by the auditor. One of the benefits of the disclosures under Section 404 is that internal control information is now readily available and may be informative as a non-financial measure of executive performance (Hoitash et al., 2009).

Numerous studies have examined the determinants and consequences of ICMW. Early studies document an association between ICMW and firm characteristics, such as business complexity, organizational change, firm size, firm profitability and investment of resources in accounting controls (Ashbaugh-Skaife, Collins, & Kinney, 2007; Doyle et al., 2007a; Ge & McVay, 2005). The implementation of SOX Section 404 has resulted in higher audit fees (Hoitash, Hoitash, & Bedard, 2008; Raghunandan & Rama, 2006), longer audit delays (Ettredge, Li, & Sun, 2006) and improved audit committee quality (Krishnan, 2005). Several studies find negative and significant cumulative abnormal returns (Beneish, Billings, & Hodder, 2008; De Franco, Guan, & Lu, 2005) and lower quality of earnings (Ashbaugh-Skaife et al., 2008; Chan et al., 2008) after SOX 404 disclosures.

Closely related to our study is the literature that examines the association between earnings quality and ICMW. Chan et al. (2008) document a greater use of positive and absolute discretionary accruals for firms reporting ICMW than for firms receiving a favourable report. Ashbaugh-Skaife et al. (2008) also find that firms reporting ICMW after the inception of SOX have lower quality accruals and significantly larger positive and negative abnormal accruals, relative to control firms. Both Ashbaugh-Skaife et al. (2008) and Bedard (2006) find evidence of improvements in earnings quality after the remediation of ICMW under Section 404, whereas Doyle et al. (2007b) claim lower-quality earnings under Section 302, but not Section 404.
[3] SOX 404 sets separate implementation dates for "accelerated filers" (primarily large firms), for "non-accelerated filers" (smaller firms), and for foreign firms. Specifically, Section 404 rules required accelerated filers to comply beginning in 2004, whereas compliance for non-accelerated filers and foreign firms began in phases starting in 2006 and ending in 2009, at which time those two groups reach full compliance.

A recent study suggests that firms place greater weight on earnings in determining incentive pay after the passage of SOX, and other concurrent reforms, because the more stringent reporting environment in the post-SOX period of 2002 results in less earnings management (Carter et al., 2009; Hoitash et al., 2009). Carter et al. (2009) report an increase in the weight placed on earnings changes as a determinant of executive compensation, and a decrease in the proportion of compensation via salary after SOX, that is larger for CEOs and CFOs than it is for other executives. In addition, Hoitash et al. (2009) claim that changes in CFO total compensation, bonus compensation and equity compensation are negatively associated with disclosures of ICMW, suggesting a performance-based compensation penalty for poor internal controls in the evaluation of CFOs. However, the empirical literature has not yet addressed whether disclosures of ICMW in the post-SOX era influence the importance of earnings in determining executive pay.

3. The role of earnings in executive compensation contracts

Prior research has identified accounting earnings and stock returns as the two implicit firm performance indicators commonly used to determine executive compensation. Accounting earnings are useful for determining executive compensation because they shield managers from market-wide variations in firm value that are beyond executives' control (Sloan, 1993). Stock returns are useful because they anticipate future cash flows and reflect the long-term economic consequences of managers' actions (Sloan, 1993). As a result, stock returns capture those facets of executive effort that are missing in earnings but are associated with compensation (Clinch, 1991; Lambert & Larcker, 1987). The usefulness of a firm performance measure, such as earnings or returns, in executive contracts is determined by its precision and sensitivity (Banker & Datar, 1989), and the optimal weight on a performance measure increases as the precision times sensitivity (or the signal-to-noise ratio) increases. Sensitivity refers to the responsiveness of the measure to actions taken by the manager, and precision reflects the noise, or variance, of the performance measure conditional on the manager's actions. Consistent with prior studies, we model executive compensation as a function of both accounting earnings and returns. Based on this model specification, the weight on earnings as a performance measure, therefore, is a function of its precision and sensitivity, relative to stock returns, in providing information about the efforts of managers.

4. The impact of ICMW on the earnings–compensation relation

In this study we argue that, for several reasons, an ICMW report indicates that reported earnings capture executives' effort with less precision and are less sensitive to managerial effort than the earnings of firms not reporting an ICMW.
First, managers are potentially more likely to use accruals to intentionally bias earnings if internal controls are weak. A more effective internal control system allows less managerial discretion in the accrual process (Ashbaugh-Skaife et al., 2008; Doyle et al., 2007b), and thus reduces the ability of management to manipulate accruals for the purpose of increasing their compensation.

Second, weak internal controls potentially permit accounting errors to occur and go undetected (Doyle et al., 2007b). An ICMW can impair the sensitivity of the earnings measure for executive compensation because the earnings of firms with ICMW may reflect delayed or untimely information (Chan et al., 2008). Further, the noise regarding managers' performance in reported earnings due to deficient internal controls is likely to be unpredictable and may not be reflected in the properties of previously reported earnings numbers. For these reasons, the precision times sensitivity of earnings with regard to the manager's actions for firms reporting ICMW is predicted to be lower than that of firms not reporting a material weakness. If ICMW reports provide information about the precision-sensitivity relation of earnings, then they have the potential to impact the use of earnings in designing executive compensation.

It is possible that the ICMW report provides no new information to compensation committees, or that the committees fully adjust for earnings characteristics in designing executive compensation contracts. In such a case, we would find no association between executive cash compensation and reported earnings. However, if effective internal controls provide information on the sensitivity-precision of earnings, then an ICMW report has the potential to impact the strength of the relation between accounting earnings and executive compensation. We predict that the relation between executive compensation and accounting earnings is lower for firms that report ICMW. This leads to our first hypothesis:

H1. Firms that report ICMW have a lower accounting earnings–executive compensation relation than firms with NOMW.

5. The impact of ICMW type on the earnings–compensation relation

Depending on the underlying cause of the ICMW, additional monitoring mechanisms or substantive testing can mitigate the negative effects of poor internal controls and impact the weight placed on earnings in executive compensation contracts. Consistent with Doyle et al. (2007a, 2007b), we categorize ICMW disclosures into two categories that may have different impacts on the earnings–compensation relation. First, Account-Specific (AS) material weaknesses arise from routine firm operations and relate to controls over specific account balances, such as accounts receivable, inventory, and legal proceedings, or transaction-level processes. When AS material weaknesses are identified, executives or auditors can easily audit around them by performing additional substantive procedures.

In contrast, Company-Level (CL) material weaknesses reflect issues beyond the direct control of the executives and relate to more macro-level controls such as the control environment, general personnel training, organizational-level accountability, or the overall financial reporting processes. Due to the pervasiveness of CL material weaknesses, the scope of audit efforts frequently needs to be expanded to deal with these more serious concerns regarding the reliability of the financial statements (Moody's, 2006; PCAOB, 2004). We therefore expect a more negative association between disclosures of Company-Level material weaknesses and the weight on earnings in the compensation contract relative to Account-Specific weaknesses.
H2. Company-Level ICMW have a stronger negative association with the accounting earnings–executive compensation relation than do Account-Specific ICMW.

6. Sample selection and research design

6.1. Sample selection

We use several sources of data: (1) Audit Analytics, (2) Compustat, (3) CRSP, (4) ExecuComp, (5) firms' financial statements and (6) Lexis-Nexis Academic Universe. We start by collecting data from Section 404 disclosures of auditors' opinions on ICMW over financial reporting from firms' Form 10-K filings from January 2004 to December 2006.[4] To ensure that the identified accelerated filers under SOX 404 pertain to a material weakness in internal control, we follow up our initial search of firms that receive adverse opinions on their ICMW in the Audit Analytics database with a manual check through Lexis-Nexis. For our sample period, we identify 9899 observations with clean reports and 1399 observations that received adverse opinions on their financial reporting with at least one type of internal control problem as a material weakness. After controlling for duplicates or non-accelerated filers from 2004 to 2006, we validate 1336 adverse reports and 9865 clean reports for a total of 11,201 observations.[5]

Our research design examines the change in executive compensation in the year following the ICMW. For ICMW in the years 2004–2006, we require compensation data for 2004–2007 to compute the change in compensation. We eliminate 513 ICMW and 5999 NOMW (532 ICMW and 6415 NOMW) firm-year observations for the CEO (CFO) due to missing salary and bonus data in the firms' proxy statements or in ExecuComp. Next, we collect stock return and accounting information from CRSP and Compustat, respectively, resulting in a loss of 399 ICMW and 129 NOMW (405 ICMW and 223 NOMW) firm-year observations for the CEO (CFO). Finally, we exclude firm-year observations from utility and financial firms because these companies operate in unique regulatory environments that are likely to influence executive compensation. The final sample for this study consists of 391 (366) firm-year observations with ICMW and 3648 (3138) firm-year observations with NOMW for the CEO (CFO). We summarize our sample selection process in Panel A of Table 1.

Panel B of Table 1 summarizes the ICMW subsample by the type of weakness. Following the recommendations of the PCAOB's Standard No. 2 and Moody's (Doss & Jonas, 2004; Doyle et al., 2007a, 2007b), two types of material weaknesses can be classified based on different objectives. Our study identifies firms as having either an Account-Specific (AS) or a Company-Level (CL) material weakness.[6] Of the 391 (366) firm-year observations in the reported-ICMW subsample, 162 (140) were Company-Level and 229 (226) were Account-Specific weaknesses for the CEO (CFO) subsample.

Panel C of Table 1 summarizes the industry distribution of the CEO sample of 391 firm-years with ICMW and the 3648 firm-years without any material weakness, based on their two-digit SIC codes. The 391 ICMW firm-years cover six industry groups. Among them, the Services industry has the highest number of firms, followed by the Machinery, Construction and manufacturing, and Wholesale and retail industries.

[4] According to PCAOB (Standard No. 2), three types of internal control deficiency over financial reporting exist: (1) a control deficiency, (2) a significant deficiency, or (3) a material weakness. Since public firms are only required to disclose material weaknesses under Section 404, our main empirical test uses all Section 404 reports available on Audit Analytics, which includes 3864 firm-year observations from 2004 to 2006.
The industry distribution for the 3648 NOMW firm-years has the highest number of observations in the Services industry, followed closely by the Machinery and the Construction and manufacturing industries.

6.2. Research design

To test H1, which predicts a lower strength in the earnings–compensation relation when the firm reports an ICMW under Section 404 of SOX, we use the following model (1):

ΔCashComp_{i,t} = β_0 + β_1 ICMW_{i,t−1} + β_2 ΔROA_{i,t} + β_3 RET_{i,t} + β_4 (ICMW_{i,t−1} × ΔROA_{i,t}) + β_5 (ICMW_{i,t−1} × RET_{i,t}) + β_6 LOSS_{i,t} + β_7 (LOSS_{i,t} × ΔROA_{i,t}) + β_8 (LOSS_{i,t} × RET_{i,t}) + β_9 SIZE_{i,t} + β_{10} MTB_{i,t} + β_{11} CEOCHAIR_{i,t} + β_{12} IndAC_{i,t} + β_{13} IndBD_{i,t} + β_{14} BoardMeetings_{i,t} + β_{15} AcctExp_{i,t} + YEAR + IND + ε_t    (1)

Consistent with prior studies, we focus on cash compensation because almost all firms explicitly use accounting measures as a determinant of the cash bonus (Murphy, 2000; Core, Guay, & Larcker, 2003; Huson et al., 2012). No such evidence exists for the association between equity-based pay and earnings because stock option grants are offered not only to reward executives but also to introduce convexity in executive compensation contracts, for retention purposes, and because of tax and financial reporting costs (Core et al., 2003).[7]

Similar to Hoitash et al. (2009), our dependent variable is the change in salary plus bonus pay from year t−1 to year t, deflated by the beginning-of-year salary in year t (ΔCashComp). Our independent variables are (1) an internal control deficiency indicator variable (ICMW or CL/AS) at time t−1, (2) the percentage change in return on assets (ΔROA), (3) stock returns (RET) and (4) interaction terms between the internal control deficiency indicator and the percentage change in both ROA and stock returns. We include annual stock returns for year t in the model specification because a meaningful association between compensation and firm performance must include returns (Murphy, 1998; Sloan, 1993).

The parameter estimate β_4 on ICMW × ΔROA examines H1. Under the null hypothesis that the sensitivity of executive compensation to earnings is unaffected by internal control deficiency, the parameter β_4 would be insignificant. However, we predict that the weight assigned to earnings will decrease with ICMW disclosures. As such, we expect the estimate β_4 for the interaction ICMW × ΔROA to be negative and significant. It is possible that, with poor internal controls,

[5] After controlling for 97 duplicates or non-accelerated filers from 2004 to 2006, we validate 1336 adverse reports (relating to 452 firms in 2004, 487 firms in 2005, and 397 firms in 2006) and 9865 clean reports (2478 firms in 2004, 3478 firms in 2005, and 3909 firms in 2006) for a total of 11,201 observations.

[6] We want to thank one of the authors (Doyle et al., 2007a, 2007b) for the conceptual foundations of these two types of ICMW and the coding validations. Our classification of a Company-Level vs. an Account-Specific material weakness is mutually exclusive. For example, if a firm has both Company-Level weaknesses and Account-Specific material weaknesses, then we code the firm as having a Company-Level weakness (Doyle et al., 2007a). In addition, three or more Account-Specific material weaknesses are coded as Company-Level weaknesses (Doyle et al., 2007b, p. 1149).
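For readers who want to see how model (1) maps onto an estimation script, the sketch below fits the same specification by ordinary least squares on a small synthetic panel. It is only an illustration: the column names, the synthetic data, and the choice of firm-clustered standard errors are assumptions of this sketch, not the authors' code, data, or estimator. The coefficient on the ICMW(t−1) × ΔROA interaction plays the role of β_4, the term that tests H1.

```python
# Minimal sketch of estimating model (1) with OLS; synthetic data, hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "firm_id": rng.integers(0, 100, n),            # panel identifier, used for clustered SEs
    "year": rng.choice([2004, 2005, 2006], n),
    "industry": rng.choice(["services", "machinery", "retail"], n),
    "icmw_lag": rng.integers(0, 2, n),              # ICMW indicator at t-1
    "d_roa": rng.normal(0, 0.05, n),                # percentage change in ROA
    "ret": rng.normal(0.08, 0.25, n),               # annual stock return
    "loss": rng.integers(0, 2, n),
    "size": rng.normal(7, 1.5, n),                  # e.g. log of total assets
    "mtb": rng.normal(2.5, 1.0, n),
    "ceochair": rng.integers(0, 2, n),
    "ind_ac": rng.uniform(0.5, 1.0, n),             # audit committee independence
    "ind_bd": rng.uniform(0.5, 1.0, n),             # board independence
    "board_meetings": rng.integers(4, 12, n),
    "acct_exp": rng.integers(0, 2, n),
})
# Dependent variable: change in salary + bonus, deflated by beginning-of-year salary
# (here simulated with a built-in negative ICMW x dROA effect, consistent with H1).
df["d_cashcomp"] = (0.4 * df["d_roa"] + 0.1 * df["ret"]
                    - 0.3 * df["icmw_lag"] * df["d_roa"]
                    + rng.normal(0, 0.1, n))

# "a*b" in the formula expands to a + b + a:b, so the icmw_lag:d_roa term is beta4.
formula = ("d_cashcomp ~ icmw_lag*d_roa + icmw_lag*ret + loss*d_roa + loss*ret"
           " + size + mtb + ceochair + ind_ac + ind_bd + board_meetings + acct_exp"
           " + C(year) + C(industry)")
fit = smf.ols(formula, data=df).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(fit.summary().tables[1])
print("beta4 (ICMW x dROA):", fit.params["icmw_lag:d_roa"])
```

Under H1 the estimate printed for the interaction term would be negative and significant; the fixed effects for year and industry correspond to the YEAR and IND terms in model (1).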
Table 1
Sample selection and descriptive statistics (January 2004 – December 2007).

Panel A: Sample selection and sample composition

                                                         CEO subsample         CFO subsample
                                                         ICMW      NOMW        ICMW      NOMW
10-K filings from Audit Analytics and subsequent
  manual review, 2004–2006                               1399      9899        1399      9899
Less: Duplicate observations                              (63)      (34)        (63)      (34)
SOX 404 disclosure data, 2004–2006                       1336      9865        1336      9865
Missing compensation data, 2004–2007                     (513)    (5999)       (532)    (6415)
Missing Compustat & CRSP data, 2004–2007                 (399)     (129)       (405)     (223)
Less: Utilities & financial firms, 2004–2007              (33)      (89)        (33)      (89)
Testing H1 & H2                                           391      3648         366      3138

Panel B: Sample firms by types of ICMW

        Company-Level (CL)       Account-Specific (AS)
        CEO        CFO           CEO        CFO
2004     68         60           108        107
2005     60         52            88         86
2006     34         28            33         33

Panel C: Sample composition by industry (CEO sample)

2-digit SIC   Industry description               ICMW #Obs.   ICMW %   NOMW #Obs.   NOMW %
10–13         Mining                                  11        2.8        171        4.7
23–34         Construction and manufacturing          65       16.6        775       21.2
35–39         Machinery                               97       24.8        797       21.8
42–49         Transportation and utilities            44       11.3        494       13.5
50–59         Wholesale and retail                    49       12.5        392       10.7
72–87         Services                                125       32.0       1019       27.9

[7] For these reasons, any inferences that we draw about internal control weaknesses and the use of earnings from tests using equity-based compensation would be inconclusive. Additionally, the importance of earnings in determining executive cash-based pay has increased in the post-SOX era, which is the period that we examine (Carter et al., 2009).
General English Admission Test For Non-English MajorPh.D. program(Harbin Institute of Technology)Passage OneQuestions 1-7 are based on the following passage:According to a recent theory, Archean-age gold-quartz vein systems were formed over two billion years ago from magmatic fluids that originated from molten granitelike bodies deep beneath the surface of the Earth. This theory is contrary to the widely held view that the systems were deposited from metamorphic fluids, that is, from fluids that formed during the dehydration of wet sedimentary rocks. The recently developed theory has considerable practical importance. Most of the gold deposits discovered during the original gold rushes were exposed at the Earth’s surface and were found because they had shed trails of alluvial gold that were easily traced by simple prospecting methods. Although these same methods still leas to an occasional discovery, most deposits not yet discovered have gone undetected because they are buried and have no surface expression.The challenge in exploration is therefore to unravel the subsurface geology of an area and pinpoint the position of buried minerals. Methods widely used today include analysis of aerial images that yield abroad geological overview, geophysical techniques that provide data on the magnetic, electrical, and mineralogical properties of the rocks being investigated, and sensitive chemical tests that are able to detect : the subtle chemical halos that often envelop mineralization. However, none of these high-technology methods are of any value if the sites to which they are applied have never mineralized, and to maximize the chances of discovery the explorer must therefore pay particular attention to selecting the ground formations most likely to be mineralized. Such ground selection relies to varying degrees on conceptual models, which take into account theoretical studies of relevant factors.These models are constructed primarily from empirical observations of known mineral deposits and from theories of ore-forming processes. The explorer uses the models to identify those geological features that are critical to the formation of the mineralization being modeled, and then tries to select areas for exploration that exhibit as many of the critical features as possible.1. The author is primarily concerned with .A. advocating a return to an older methodology.B. explaining the importance of a recent theory.C. enumerating differences between two widely used methodsD. describing events leading to a discovery2. According to passage, the widely held view of Archean-age gold-quartz vein systems is that such systemsA were formed from metamorphic fluids.B originated in molten granitelike bodiesC were formed from alluvial depositsD generally have surface expression3. The passage implies that which of the following steps would bethe first performed by explorers who wish to maximize their chances of discovering gold?A Surveying several sites known to have been formed morethan two billion years ago.B Limiting exploration to sites known to have been formedform metamorphic fluid.C Using an appropriate conceptual model to select a site forfurther exploration.D Using geophysical methods to analyze rocks over a broadarea.4. 
Which of the following statements about discoveries of gold deposits is supported by information in the passage?A The number of gold discover made annually has increasedbetween the time of the original gold rushes and the presentB New discoveries of gold deposits are likely to be the resultof exploration techniques designed to locate buriedmineralizationC It is unlikely that newly discovered gold deposits will everyield as much as did those deposits discovered during theoriginal gold rushes.D Modern explorers are divided on the question of the utilityof simple prospecting methods as a source of newdiscoveries of gold deposits.5. It can be inferred from the passage that which of the following is easiest to detect?A A gold-quartz vein system originating in magma tic fluids.B A gold-quartz vein system originating in metamorphic fluids.C A gold deposit that is mixed with granite.D A gold deposit that has shed alluvial gold.6. The theory mentioned in line I relates to the conceptualmodels discussed in the passage in which of the followingways?A It may furnish a valid account of ore-forming processes,and hence, can support conceptual models that have greatpractical significance.B It suggests that certain geological formations, long believedto be mineralized, are in fact mineralized thus confirming current conceptual models.C. It suggests that there may not be enough similarity acrossArchean-age gold-quartz vein systems to warrant the formulation of conceptual models.D It corrects existing theories about the chemical halos ofgold deposits, and thus provides a basis for correcting current conceptual models.7. According to the passage methods of exploring for gold thatare widely used today are based on which of the following facts?A Most of the Earth’s remaining gold deposits are stillmolten.B Most of the Earth’s remaining gold deposits are exposedat the surface.C Most of the Earth’s remaining gold deposits are buriedand have no surface expressionD Only one type of gold deposit warrants exploration. sincethe other types of gold deposits are found in regions difficult to reachPassage TwoQuestions 8-15 are based on the following passage:In choosing a method for determining climatic conditions that existed in the past, paleoclimatologists invoke four principal criteria. First, the material—rocks, lakes, vegetation, etc.—on which the method relies must be widespread enough to provide plenty of information, since analysis of material that is rarely encountered will not permit correlation with other regions or with other periods of geological history. Second in the process of formation, the material must have received an environmental signal that reflects a change in climate and that can be deciphered by modern physical or chemical means. Third, at least some of the material must have retained the signal unaffected by subsequent changes in the environment. Fourth, it must be possible to determine the time at which the inferred climatic conditions held. This last criterion is more easily met in dating marine sediments, because dating of only a small number of layers in a marine sequence allows the age of other layers to be estimated fairly reliably by extrapolation and interpolation. 
By contrast, because sedimentation is much less continuous in continental regions, estimating the age of a continental bed from the known ages of beds above and below is more risky.One very old method used in the investigation of past climatic conditions involves the measurement of water levels in ancient lakes.In temperate regions, there are enough lakes for correlations between them to give us a tenable picture. In arid and semiarid regions, on the other hand, the small number of lakes and the great distances between them reduce the possibilities for correlation. Moreover, since lake levels are controlled by rates of evaporation as well as by precipitation, the interpretation of such levels is ambiguous. For instance, the fact that lake levels in the semiarid southwestern United States appear to have been higher during the last ice age than they are now was at one time attributed to increased precipitation. On the basis of snowline elevations, however, it has been concluded that the climate then was not necessarily wetter than it is now, but rather that both summers and winters were cooler, resulting in reduced evaporationAnother problematic method is to reconstruct former climates on the basis of pollen profiles. The type of vegetation in a specific region is determined by identifying and counting the various pollen grains found there. Although the relationship between vegetation and climate is not as direct as the relationship between climate and lake levels, the method often works well in the temperate zones. In arid and semiarid regions in which there is not much vegetation, however, small changes in one or a few plant types can change the picture traumatically, making accurate correlations between neighboring areas difficult to obtain.8. Which of the following statements about the difference betweenmarine and continental sedimentation is supported by information in the passage?A.Data provided by dating marine sedimentation is moreconsistent with researchers’ findings in other disciplines thanis data provided by dating continental sedimentation.B.It is easier to estimate the age of a layer in a sequence ofcontinental sedimentation than it is to estimate the age of alayer in a sequence of marine sedimentation.C.Marine sedimentation is much less widespread than continentalsedimentationD.Marine sedimentation is much more continuous than iscontinental sedimentation.9. Which of the following statements best describes the organization of the passage as a whole?A.The author describes a method for determining past climaticconditions and then offers specific examples of situations inwhich it has been used.B.The author discusses the method of dating marine andcontinental sequences and then explains how dating is moredifficult with lake levels than with pollen profiles.C.The author describes the common requirements of methodsfor determining past climatic conditions and then discusses examples of such methods.D.The author describes various ways of choosing a material fordetermining past climatic conditions and then discusses how two such methods have yielded contradictory data.10. 
It can be inferred from the passage that paleoclimatologistshave concluded which of the following on the basis of their study of snow-line elevations in the southwest6ern United States?A.There is usually more precipitation during an ice age because ofincreased amounts of evaporationB.There was less precipitation during the last ice age than there istoday.ke levels in the semiarid southwestern United States werelower during the last ice age than they are today.D.The high lake levels during the last ice age may have been aresult of less evapo9ration rather than more precipitation.11. Which of the following would be the most likely topic for aparagraph that logically continues the passage?A.The kinds of plants normally found in arid regions.B.The effect of variation in lake levels on pollen distribution.C.The material best suited to preserving signal of climaticchanges.D.A third method fro investigating past climatic conditions.12. the author discusses lake levels in the southwestern United States in order toA.illustrate the mechanics of the relationship between lake level,evaporation, and precipitationB.provide an example of the uncertainty involved in interpretinglake levels.C.Prove that there are not enough ancient lakes with which tomake accurate correlationsD.Explain the effects of increased rates of evaporation on levelsof precipitation.13. It can be inferred from the passage that an environmental signalfound in geological material would no be useful to paleoclimatologists if it .A.had to be interpreted by modern chemical meansB.reflected a change in climate rather than a long-term climaticconditionC.was incorporated into a material as the material was formingD.also reflected subsequent environmental changes.14. According to the passage the material used to determine pastclimatic conditions must be widespread for which of thefollowing reasons?Ⅰ.Paleoclimatologists need to make comparisons between periods of geological history.Ⅱ. Paleoclimatologists need to compare materials that have supporteda wide variety of vegetationⅢ. Paleoclimatologists need to make comparisons with data collected in other regions.A.I onlyB.ⅡonlyC.I and ⅡonlyD.I and Ⅲonly15. Which of the following can be inferred from the passage aboutthe study of past climates in arid and semiarid regions?A.It is sometimes more difficult to determine past climaticconditions in arid and semiarid regions than in temperateregionsB.Although in the past more research has been done ontemperate regions, paleoclimatologists have recently turnedtheir attention to arid and semiarid regions.C.Although more information about past climates can begathered in arid and semiarid than in temperate regions, datingthis information is more difficult.D.It is difficult to study the climatic history of arid and semiaridregions because their climates have tended to vary more thanthose of temperate regions.Passage ThreeQuestions 16-22 are based on the following passage:While there is no blueprint for transforming a largely government-controlled economy into a free one, the experience of the United Kingdom since 1979 clearly shows one approach that works: privatization, in which state-owned industries are sold to private companies. By 1979, the total borrowings and losses of state-owned industries were running at about £3 billion a year. By selling many of these industries, the government has decreased these borrowings and losses, gained over £34 billion from the sales, and now receives tax revenues from the newly privatized companies. 
Along with a dramatically improved overall economy, the government has been able to repay 12.5 percent of the net national debt over a two-year period.In fact privatization has not only rescued individual industries and a whole economy headed for disaster, but has also raised the level of performance in every area. At British Airways and British Gas, for example, productivity per employee has risen by 20 percent. At associated British Ports. labor d isruptions common in the 1970’s and early 1980’s have now virtually disappeared. At British Telecom,there is no longer a waiting list—as there always was before privatization—to have a telephone installed.Part of this improved productivity has come about because the employees of privatized industries were given the opportunity to buy shares in their own companies. They responded enthusiastically to the offer of shares; at British Aerospace 89 percent of the eligible work force bought shares; at Associated British Ports 90 percent; and at British Telecom 92 percent. When people have a personal stake in something, they think about it, care about it, work to make it prosper. At the National Freight Consortium, the new employee-owners grew so concerned about t heir company’s profits that during wage negotiations they actually pressed their union to lower its wage demands. Some economists have suggested that giving away free shares would provide a needed acceleration of the privatization process. Yet they miss Th omas Paine’s point that “what we obtain too cheap we esteem too lightly” In order for the far-ranging benefits of individual ownership to be achieved by owners, companies, and countries, employees and other individuals must make their own decisions to buy, and they must commit some of their own resources to the choice.16. According to the passage all of the following were benefits ofprivatizing state owned industries in the United KingdomEXCEPTA.Privatized industries paid taxes to the governmentB.The government gained revenue from selling state-ownedindustriesC.The government repaid some of its national debtD.Profits from industries that were still state-owned increased17. According to the passage, which of the following resulted inincreased productivity in companies that have been privatized?A.A large number of employees chose to purchase shares in theircompanies.B.Free shares were widely distributed to individual shareholders.C.The government ceased to regulate major industries.D.Unions conducted wage negotiations fro employees.18. It can be inferred from the passage that the author considers labor disruptions to beA.an inevitable problem in a weak national economyB.a positive sign of employee concern about a companyC.a predictor of employee reactions to a company’s offer to sellshares to themD.a deterrence to high performance levels in an industry.19. The passage supports which of the following statements aboutemployees buying shares in their won companies?A.At three different companies, approximately nine out ten of theworkers were eligible to buy shares in their companies.B.Approximately 90%of the eligible workers at three differentcompanies chose to buy shares in their companies. C.The opportunity to buy shares was discouraged by at least somelabor unions.panies that demonstrated the highest productivity were thefirst to allow their employees the opportunity to buy shares. 20. 
Which of the following statements is most consistent with the principle described in L25-26?A.A democratic government that decides it is inappropriate toown a particular industry has in no way abdicated its responsibilities as guardian of the public interest.B.The ideal way for a government to protect employee interests isto force companies to maintain their share of a competitive market without government subsidies.C.The failure to harness the power of self-interest is an importantreason that state-owned industries perform poorlyernments that want to implement privatization programsmust try to eliminate all resistance to the free-market system. 21. Which of the following can be inferred from the passage aboutthe privatization process in the United Kingdom?A.It depends to a potentially dangerous degree on individualownership of shares.B.It conforms in its mos t general outlines to Thomas Paine’sprescription for business ownership.C.It was originally conceived to include some giving away of freeshares.D.It is taking place more slowly than some economists suggest isnecessary.22. The quotation in L32-33 is most probably used to .A.counter a position that the author of the passage believes isincorrect.B.State a solution to a problem described in the previous sentence.C.Show how opponents of the viewpoint of the author of thepassage have supported their arguments.D.point out a paradox contained in a controversial viewpoint.Passage FourQuestions 23-30 are based on the following passage:Historians of women’s labor in the United States at first largely disregarded the story of female service workers—women earning wages in occupations such as salesclerk, domestic servant, and office secretary. These historians focused instead on factory work, primarily because it seemed so different from traditional,unpaid “women’s work ”in the home, and because the underlying economic forces of industrialism were presumed to be gender-blind and hence emancipation in effect. Unfortunately, emancipation has been less profound than expected, for not even industrial wage labor has escaped continued sex segregation in the workplace.To explain this unfinished revolution in the status of women, historians have recently begun to emphasize the way a prevailing definition of femininity often determines the kinds of work allocated to women, even when such allocation is inappropriate to new conditions. For instance, early textile-mill entrepreneurs, in justifying women’s employment in wage labor, made much of the assumption that women were by nature skillful at detailed tasks and patient in carrying out repetitive chores; the mill owners thus imported into the new industrial order hoary stereotypes associated with the homemaking activities they presumed to have been the purview of women. Because women accepted the more unattractive new industrial tasks more readily than did men, such jobs came to be regarded as female jobs. And employers, who assumed that women’s “real” aspirations were for marriage and family life, declined to pay women wages commensurate with those of men. Thus many lower-skilled, lower-paid, less secure jobs came to beperceived as “female.”More remarkable than the origin has been the persistence of such sex segregation in twentieth-century industry. Once an occupation came to be perceived as “female”, employers showed surprisingly little interest in changing that perception, even when higher profits beckoned. 
And despite the urgent need of the United States during the Second World War to mobilize its human resources fully, job segregation by sex characterized even he most important war industries. Moreover, once the war ended, employers quickly returned to men most of the “male” jobs that women had been permitted to master.23. According to the passage, job segregation by sex in the United States was.A.greatly diminlated by labor mobilization during the SecondWorld War.B.perpetuated by those textile-mill owners who argued in favorof women’s employment in wage laborC.one means by which women achieved greater job securityD.reluctantly challenged by employers except when theeconomic advantages were obvious24. According to the passage, historians of women’s laborfocused on factory work as a more promising area ofresearch than service-sector work because factory workA.involved the payment of higher wagesB.required skill in detailed tasksC.was assumed to be less characterized by sex segregationD.was more readily accepted by women than by men25. It can be inferred from the passage the early historians ofwomen’s labor in the United States paid little attention to women’s employment in the service sector of the economy becauseA.fewer women found employment in the service sector than infactory workB.the wages paid to workers in the service sector were muchmore short-term than in factory workC.women’s employment in the service sector tended to bemuch more short-term than in factory workD.employment in the service sector seemed to have much incommon with the unpaid work associated with homemaking 26. The passage supports which of the following statements aboutthe early mill owners mentioned in the second paragraph? A.They hoped that by creating relatively unattractive“female” jobs they would discourage women from losing interest in marriage and family life.B.They sought to increase the size of the available labor forceas a means to keep men’s wages low.C.They argued that women were inherently suited to do well inparticular kinds of factory workD.They felt guilty about disturbing the traditional division oflabor in family.27.It can be inferred from the passage that the “unfinishedrevolution” the author mentions in L11 refers to theA.entry of women into the industrial labor market.B.Development of a new definition of femininity unrelated tothe economic forces of industrialismC.Introduction of equal pay for equal work in all professionsD.Emancipation of women wage earners fromgender-determined job allocation28. The passage supports which of the following statements about hiring policies in the United States?A.After a crisis many formerly “male ”jobs are reclassified as“female” jobs.B.Industrial employers generally prefer to hire women withprevious experience as homemakersC.Post-Second World War hiring policies caused women to losemany of their wartime gains in employment opportunity.D.Even war industries during the Second World War werereluctant to hire women for factory work.29. Which of the following words best expresses the opinion ofthe author of the passage concerning the notion that womenare more skillful than men in carrying out details tasks?A.“patient” (line17)B.“repetitive” (line18)C.“hoary” (line19)D.“homemaking” (line19)30. 
Which of the following best describes the relationship of thefinal paragraph to the passage as a whole?A.The central idea is reinforced by the citation of evidence drawnfrom twentieth-century history.B.The central idea is restated in such a way as to form a transitionto a new topic for discussionC.The central idea is restated and juxtaposed with evidence thatmight appear to contradict it.D.A partial exception to the generalizations of the central idea isdismissed unimportant.Passage FiveQuestions 31-36 are based on the following passage:Two modes of argumentation have been used on behalf ofwomen’s emancipation in Western societies. Arguments in what could be called the “relational” feminist tradition maintain the doctrine of “equality in difference”, or equity as distinct for equality. They posit that biological distinctions between the sexes result in a necessary sexual division of labor in the family and throughout society and that women’s procreative labor is cu rrently undervalued by society, to the disadvantage of women. By contrast, the individualist feminist tradition emphasizes individual human rights and celebrates women’s quest for personal autonomy, while downplaying the importance of gender roles and minimizing discussion of childbearing and its attendant responsibilities.Before the late nineteenth century, these views coexisted within the feminist movement, often within the writings of the same individual. Between 1890and 1920, however, relational feminism, which had been the dominant strain in feminist thought, and which still predominates among European and non-western feminists, lost ground in England and the United States. Because the concept of individual rights was already well established in the Anglo-Saxon legal and political tradition, individualist feminism came to predominate in England-speaking countries. At the same time, the goals of the two approaches began to seem increasingly irreconcilable. Individualist feminists began to advocate a totally gender-blind system with equaleducational and economic opportunities outside the home should be available for all women, continued to emphasize women’s special contributions to society as homemakers and mothers; they demanded special treatment including protective legislation for women workers. State-sponsored maternity benefits, and paid compensation for housework.Relational arguments have a major pitfall: because they underline women’s physiological and psychological distinctiveness, they are often appropriated by political adversaries and used to endorse male privilege. But the individualist approach, by attacking gender roles, denying the significance of physiological difference, and condemning existing familial institutions as hopelessly patriarchal, has often simply treated as irrelevant the family roles important to many women. If the individualist framework, with its claim for women’s autonomy, could be harmonized with the family-oriented concerns of relational feminists, a more fruitful model for contemporary feminist politics could emerge.31. 
The author of the passage alludes to the well-established natureof the concept of individual rights in the Anglo-Saxon legal andpolitical tradition in order toA.illustrate the influence of individualist feminist thought on moregeneral intellectual trends in English history.B.Argue that feminism was already a part of the largerAnglo-Saxon intellectual tradition, even though this has often gone unnoticed by critics of women’s emancipationC.Explain the decline in individualist thinking among feminists innon-English-speaking countries.D.Help account for an increasing shift toward individualistfeminism among feminists in English-speaking countries.32. The passage suggests that the author of the passage believes which of the following?A.The predominance of individualist feminism inEnglish-speaking countries is a historical phenomenon, the causes of which have not yet been investigated.B.The individualist and relational feminist views are irreconcilable,given their theoretical differences concerning the foundations of society.C.A consensus concerning the direction of future feminist politicswill probably soon emerge, given the awareness among feminists of the need for cooperation among women.D.Political adversaries of feminism often misuse argumentspredicated on differences between the sexes to argue that the existing social system should be maintained.33. It can be inferred from the passage that the individualist。
Fluorescence Quantitative Analysis

"Fluorescence quantitative analysis" refers to a technique used to measure the concentration of a substance by examining its fluorescence properties. This technique is commonly employed in various scientific fields such as chemistry, biology, and environmental science. In this article, we will explore the principles behind fluorescence quantitative analysis, the instruments used, and some applications of this technique.

1. Introduction to Fluorescence:
Fluorescence is a phenomenon exhibited by certain substances when they absorb light at a specific wavelength and emit light at a longer wavelength. This emission of light is called fluorescence. It occurs due to the excitation of electrons in the atoms or molecules of the substance.

2. Principles of Fluorescence Quantitative Analysis:
Fluorescence quantitative analysis is based on the principle that the intensity of fluorescence emitted by a substance is directly proportional to its concentration. This principle forms the basis for detecting and measuring the concentration of various substances.

3. Instrumentation for Fluorescence Quantitative Analysis:
a. Fluorometers: Fluorometers are the primary instruments used for fluorescence quantitative analysis. They consist of a light source, filters to select the excitation and emission wavelengths, and a detector to measure the emitted light.
b. Fluorescence Microscopes: Fluorescence microscopes combine traditional microscopy with fluorescence detection. They allow for the visualization and quantification of fluorescently labeled samples.
c. Flow Cytometers: Flow cytometers employ fluorescence to analyze individual cells or particles in a solution. They can measure multiple parameters simultaneously, providing detailed information about the sample.

4. Process of Fluorescence Quantitative Analysis:
a. Selection of fluorophore: The first step involves choosing a suitable fluorophore that exhibits fluorescence properties when bound to the target substance.
b. Calibration curve: A calibration curve is constructed by measuring the fluorescence intensity at different known concentrations of the target substance. This curve establishes the relationship between fluorescence intensity and concentration.
c. Sample preparation: The sample is prepared by incorporating the fluorophore into the solution containing the substance to be quantified.
d. Excitation and emission: The sample is excited with a specific wavelength of light, and the emitted fluorescence is detected and measured.
e. Comparison with the calibration curve: The fluorescence intensity of the sample is compared with the calibration curve to determine its concentration.
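As a concrete illustration of steps (b)–(e) above, the short sketch below fits a linear calibration curve to known standards and then converts a sample's measured fluorescence intensity into a concentration. All readings, units, and the linear-fit assumption are illustrative only; a real assay would also handle blanks, replicates, and the fluorophore's linear range.

```python
# Minimal sketch of a fluorescence calibration curve; all values are made up for demonstration.
import numpy as np

# (b) Calibration curve: fluorescence intensity measured for known standard concentrations.
conc_std = np.array([0.0, 1.0, 2.0, 5.0, 10.0])             # e.g. ug/mL
intensity_std = np.array([2.0, 55.0, 108.0, 260.0, 515.0])   # arbitrary fluorescence units

# Fit intensity = slope * concentration + intercept by least squares.
slope, intercept = np.polyfit(conc_std, intensity_std, 1)

# (d)-(e) Measure the unknown sample and invert the calibration curve.
sample_intensity = 180.0
sample_conc = (sample_intensity - intercept) / slope
print(f"Estimated concentration: {sample_conc:.2f} ug/mL")
```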
Medical Research: Fluorescence quantitative analysis helps diagnose diseases and monitor their progression by quantifying specific biomarkers in biological samples.

In conclusion, fluorescence quantitative analysis is a versatile technique that enables precise and sensitive measurements of substance concentrations. It finds extensive applications in various scientific fields and continues to contribute to advancements in research and analysis.
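The calibration-curve procedure described above can be illustrated with a short numerical sketch. The example below is hypothetical: it assumes the linear regime in which fluorescence intensity is proportional to concentration, and the standard concentrations, intensities, and function names are invented for illustration rather than taken from any particular instrument or assay.

# Minimal sketch of a fluorescence calibration workflow (hypothetical data).
# Assumes the linear regime where emission intensity is proportional to concentration.
import numpy as np

# Known standards: concentration (in micromolar) and measured fluorescence intensity (a.u.)
std_conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
std_intensity = np.array([2.0, 105.0, 198.0, 510.0, 1004.0])

# Fit the calibration curve I = slope * C + intercept by least squares
slope, intercept = np.polyfit(std_conc, std_intensity, 1)

def concentration_from_intensity(intensity):
    """Invert the calibration curve to estimate an unknown concentration."""
    return (intensity - intercept) / slope

sample_intensity = 356.0  # measured fluorescence of the unknown sample
print(f"Estimated concentration: {concentration_from_intensity(sample_intensity):.2f} uM")

In practice one would also check the quality of the fit and confirm that the unknown sample's intensity falls within the calibrated range before reporting a concentration.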
Variations in the ClimateOne of the most difficult aspects of deciding whether current climatic events reveal evidence of the impact of human activities is that it is hard to get a measure of what constitutes the natural variability of the climate. We know that over the past millennia the climate has undergone major changes without any significant human intervention. We also know that the global climate system is immensely complicated and that everything is in some way connected, and so the system is capable of fluctuating in unexpected ways. We need therefore to know how much the climate can vary of its own accord in order to interpret with confidence the extent to which recent changes are natural as opposed to being the result of human activities.Instrumental records do not go back far enough to provide us with reliable measurements of global climatic variability on timescales longer than a century. What we do know is that as we include longer time intervals, the record shows increasing evidence of slow swings in climate between different regimes. To build up a better picture of fluctuations appreciably further back in time requires us to use proxy records.Over long periods of time, substances whose physical and chemical properties change with the ambient climate at the time can be deposited in a systematic way to provide a continuous record of changes in those properties overtime, sometimes for hundreds or thousands of years. Generally, the layering occurs on an annual basis, hence the observed changes in the records can be dated. Information on temperature, rainfall, and other aspects of the climate that can be inferred from the systematic changes in properties is usually referred to as proxy data. Proxy temperature records have been reconstructed from ice core drilled out of the central Greenland ice cap, calcite shells embedded in layered lake sediments in Western Europe, ocean floor sediment cores from the tropical Atlantic Ocean, ice cores from Peruvian glaciers, and ice cores from eastern Antarctica. While these records provide broadly consistent indications that temperature variations can occur on a global scale, there are nonetheless some intriguing differences, which suggest that the pattern of temperature variations in regional climates can also differ significantly from each other.What the proxy records make abundantly clear is that there have been significant natural changes in the climate over timescales longer than a few thousand years. Equally striking, however, is the relative stability of the climate in the past 10,000 years (the Holocene period).To the extent that the coverage of the global climate from these records can provide a measure of its true variability, it should at least indicate how all the natural causes of climate change have combined. These include the chaotic fluctuations of the atmosphere, the slower but equally erratic behavior of the oceans, changes in the land surfaces, and the extent of ice and snow. Also included will be any variations that have arisen from volcanic activity, solar activity, and, possibly, human activities.One way to estimate how all the various processes leading to climate variability will combine is by using computer models of the global climate. They can do only so much to represent the full complexity of theglobal climate and hence may give only limited information about natural variability. 
Studies suggest that to date the variability in computer simulations is considerably smaller than in data obtained from the proxy records.In addition to the internal variability of the global climate system itself, there is the added factor of external influences, such as volcanoes and solar activity. There is a growing body of opinion that both these physical variations have a measurable impact on the climate. Thus we need to be able to include these in our deliberations. Some current analyses conclude that volcanoes and solar activity explain quite a considerable amount of the observed variability in the period from the seventeenth to the early twentieth centuries, but that they cannot be invoked to explain the rapid warming in recent decades.Paragraph 1: One of the most difficult aspects of deciding whether current climatic events reveal evidence of the impact of human activities is that it is hard to get a measure of what constitutes the natural variability of the climate. We know that over the past millennia the climate has undergone major changes without any significant human intervention. We also know that the global climate system is immensely complicated and that everything is in some way connected, and so the system is capable of fluctuating in unexpected ways. We need therefore to know how much the climate can vary of its own accord in order to interpret with confidence the extent to which recent changes are natural as opposed to being the result of human activities.1. According to paragraph 1, which of the following must we find out in order to determine the impact of human activities upon climate?○ The major changes in climate over the past millennia○ The degree to which the climate varies naturally○ The best method for measuring climatic change○ The millennium when humans began to interfere with the climateParagraph 2: Instrumental records do not go back far enough to provide us with reliable measurements of global climatic variability on timescales longer than a century. What we do know is that as we include longer time intervals, the record shows increasing evidence of slow swings in climate between different regimes. To build up a better picture of fluctuations appreciably further back in time requires us to use proxy records.2. According to paragraph 2, an advantage of proxy records over instrumental records is that○ they are more-reliable measures of climatic variability in the past century○ they provide more-accurate measures of local temperatures○ they provide information on climate fluctuations further back in time○ they reveal information about the human impact on the climateParagraph 3:dated. Information on temperature, rainfall, and other aspects of the climate that can be inferred from the systematic changes in properties is usually referred to as proxy data. Proxy temperature records have been reconstructed from ice core drilled out of the central Greenland ice cap, calcite shells embedded in layered lake sediments in Western Europe, ocean floor sediment cores from the tropical Atlantic Ocean, ice cores from Peruvian glaciers, and ice cores from eastern Antarctica. 
While these records provide broadly consistent indications that temperature variations can occur on a global scale, there are nonetheless some intriguing differences, which suggest that the pattern of temperature variations in regional climates can also differ significantly from each other.of the sentences below best expresses the essential information in theinformation.○ Because physical and chemical properties of substances are unchanging, they are useful records of climate fluctuations over time.○ For hundreds or thousands of years, people have been observing changes in the chemical and physical properties of substances in order to infer climate change.○ Because it takes long periods of time for the climate to change, systematic changes in the properties of substances are difficult to observe.○ Changes in systematically deposited substances that are affected by climate can indicate climate variations over time.4. According to paragraph 3, scientists are able to reconstruct proxy temperature records by○ studying regional differences in temperature variations○ studying and dating changes in the properties of substances○ observing changes in present day climate conditions○ inferring past climate shifts from observations of current climatic changesParagraph 4: What the proxy records make abundantly clear is that there have been significant naturalrelative stability of the climate in the past 10,000 years (the Holocene period).in the passage is closest in meaning to○ noticeable○ confusing○ true○ unlikelyParagraph 3: Over long periods of time, substances whose physical and chemical properties change withchanges in those properties overtime, sometimes for hundreds or thousands of years. Generally, the layering occurs on an annual basis, hence the observed changes in the records can be dated. Information on temperature, rainfall, and other aspects of the climate that can be inferred from the systematic changes in properties is usually referred to as proxy data. Proxy temperature records have been reconstructed from ice core drilled out of the central Greenland ice cap, calcite shells embedded in layered lake sediments in Western Europe, ocean floor sediment cores from the tropical Atlantic Ocean, ice cores from Peruvian glaciers, and ice cores from eastern Antarctica. While these records provide broadly consistent indications that temperature variations can occur on a global scale, there are nonetheless some intriguing differences, which suggest that the pattern of temperature variations in regional climates can also differ significantly from each other.Paragraph 4: What the proxy records make abundantly clear is that there have been significant natural changes in the climate over timescales longer than a few thousand years. Equally striking, however, is the relative stability of the climate in the past 10.000 years (the Holocene period).6. According to paragraphs 3 and 4, proxy data have suggested all of the following about the climate EXCEPT:○ Regional climates may change overtime.○ The climate has changed very little in the past 10,000 years.○ Global temperatures vary more than regional temperatures.○ Important natural changes in climate have occurred over large timescales.Paragraph 5: To the extent that the coverage of the global climate from these records can provide a measure of its true variability, it should at least indicate how all the natural causes of climate change haveof the oceans, changes in the land surfaces, and the extent of ice and snow. 
Also included will be any variations that have arisen from volcanic activity, solar activity, and, possibly, human activities.7. The in the passage is closest in meaning to○ dramatic○ important○ unpredictable○ common8. All of the following are mentioned in paragraph 5 as natural causes of climate change EXCEPT○ atmospheric changes○ the slow movement of landmasses○ fluctuations in the amount of ice and snow○ changes in ocean activityParagraph 6: One way to estimate how all the various processes leading to climate variability will combine is by using computer models of the global climate. They can do only so much to represent the full complexity of the global climate and hence may give only limited information about natural variability. Studies suggest that to date the variability in computer simulations is considerably smaller than in data obtained from the proxy records.9. According to paragraph 6, which of the following is true of computer models of the global climate?○ The information they produce is still limited.○ They are currently most useful in understanding past climatic behaviors.○ They allow researchers to interpret the data obtained from proxy records.○ They do not provide information about regional climates.Paragraph 7: In addition to the internal variability of the global climate system itself, there is the added factor of external influences, such as volcanoes and solar activity. There is a growing body of opinion thatconsiderable amount of the in the period from the seventeenth to the early twentieth10. The word in the passage is closest in meaning to○ records○ discussions○ results○ variations11. The in the passage is closest in meaning to○ demonstrated○ called upon○ supported○ expected12. What is the author's purpose in presenting the information in paragraph 7?○ To compare the influence of volcanoes and solar activity on climate variability with the influence of factors external to the global climate system○ To indicate that there are other types of influences on climate variability in addition to those previously discussed○ To explain how external influences on climate variability differ from internal influences○ To argue that the rapid warming of Earth in recent decades cannot be explainedParagraph 7: In addition to the internal variability of the global climate system itself, there is the added factor of external influences, such as volcanoes and solar activity. ■There is a growing body of opinion thatboth these physical variations have a measurable impact on the climate. ■Thus we need to be able to include these in our deliberations. ■Some current analyses conclude that volcanoes and solar activity explain quite a considerable amount of the observed variability in the period from the seventeenth to the early twentieth centuries, but that they cannot be invoked to explain the rapid warming in recent decades.■13. Look at the four squares [■] that indicate where the following sentence could be added to the passage.Indeed, the contribution of volcanoes and solar activity would more likely have been to actually reduce the rate of warming slightly.Where would the sentence best fit?14.Directions:An introductory sentence for a brief summary of the passage is provided below. Complete the summary by selecting the THREE answer that express the most important ideas in the passage. Some sentences do not belong in the summary because they express ideas that not presented in the passage or are minor ideas in the passage. 
This question is worth 2 points.
A number of different and complex factors influence changes in the global climate over long periods of time.
●
●
●
Answer choices
○ In the absence of instrumental records, proxy data allow scientists to infer information about past climates.
○ Scientists see a consistent pattern in the global temperature variations that have occurred in the past.
○ Computer models are used to estimate how the different causes of climate variability combine to account for the climate variability that occurs.
○ Scientists have successfully separated natural climate variation from changes related to human activities.
○ Scientists believe that activities outside the global climate system, such as volcanoes and solar activity, may have significant effects on the system.
○ Scientists have concluded that human activity accounts for the rapid global warming in recent decades.
Answer Key:
1. ○2  2. ○3  3. ○4  4. ○2  5. ○1  6. ○3  7. ○3  8. ○2  9. ○1  10. ○2  11. ○2  12. ○2  13. ○4
14. In the absence of instrumental records, proxy data allow scientists to infer information about past climates. / Computer models are used to estimate how the different causes of climate variability combine to account for the climate variability that occurs. / Scientists believe that activities outside the global climate system, such as volcanoes and solar activity, may have significant effects on the system.
第一单元:Vocabulary Study:1.permanent2.had assembled3.discharging4.meekly5.apprentice6.partiality7.obscure8.exalted9.intruding 10cordially 11ambition 12.gallantlyCLOZE:BDCA B ADABB DBDAA CCDCB、T ranslation:1.His dream of becoming a footble star faded out as timewent by .2.A Boeiay/Boeing 747 aircraft didn’t gain enough height to climb the mountain.In a twinking ,it crash into the mountain and blew up .No one survied in the accident.3. Student have easy access to the resource in the libracy. so they are supposed to make the best of it .4.Titanic,the most luxurious ship in the world at the time .hit an icebery when she was under the way to the USA.Consequently, the ship sank into Atlantic Ocean and thousands of people died in this shipwreck.5.Every summer, all the tourists pour into this famous beach ,they packed like sardines on the beach to enjoy the sunshine.6.They have been to ST louis once and have a vague konwledge of its wonderful food ,but the day of their glory is over now ,t hey laspse into a humble silence and learn to disappear when the ruthless lindneer approches.第二单元:Vocabulary Study1.advocated2.extravagant3.vulnerable4.guru5.potential6.dispel7.shunned8.acclaimed9.enthusiasts10.stave off11.attendant 12.eventured CLOZE: CBDAD CDBAC BBCCD ABCA C.T ranslation:1.Some cyber gurus claim that internet will precent wars reduce pollution and combat various of inquality.2.Although,internet can undernible fosters communication, It will not put an end to wars . since wars are by no means cause b y the failure of different people to understand each other equlty.3.The internet can help reduce energy comsumption and pollution ,only if doing things online replace realworld activies.4.The poor are not shunning the internet because they cannot affort it the problem is that they lack the skills to exploit it efficiently, therefore, it make more sense to improve universal literacy than universal.5.Thanks to internet ,income inequality between people doing similar jods in different countries has been reduced. 
however, t he inequality between information works in poor countries and their porrest compatroits has been increased.6.If human nature remains stablely changed ,desipte the claim of technologist predict ,humanity cannot simply incent away its failing.第三单元:Vocabulary Study1.sank2.clear3.rainfall4.drought5.Flowering6.loose7.pores8.graze9.spine 10.trapped 11.venture 12.fertileCLOS E: CBDA B DBCCA A CDA C CADA BT ranslation:1.Many stuies indicate that thedesert air is so dry that it contains any moisture.2.Although the children form age 5 to 16 must go to school in Britain according to the laws ,about 1%of the children still ca n not read when they have primary school.3.After heared the news ,I knew i fell vestless again within a fornight .4.We think unanimouly to answer the question ,we must look more closely at the faces.5.Though out the world, goverment at all levels are taking effectively measures to prevent environment pollution.6.Some people think that the objective items ,such as multiple choices, should be used for an examation, others donot agree, because they believe that this kind test has some bad effect on students leaning.第四单元Vocabulary Study1.Physical2.accumulation3.diversity4.precipitated5.muscular6.pathological7.symptomatic8.vigorous9.psychologically 10.Anxiety 11.restored 12.refreshedCLOS E: A CDCB BCCBD CDCBA ADCDBT ranslation:1.I find that walking along the quiet lake can provide refreshment form a day is sedentary jod.2.Exercising and relexing youself is often prescribed as an effective cure for fatigue.3.The less active you are ,the faster aging process accelerates and the more vulnerable you became to physical and psychological problems.4.We are collecting money for repairs to the chur .if any of you would like to make a contribution ,we shall be most grateful.5.The scientist are currently focus on making experiment,in hope of finding effective methods to cure cancers.5.The speach that the chairman delivered at the conferences made much sense to regain the confidence of empolyees.第五单元Vocabulary Study1.uniform2.distinguish/recognize3.ascertained4.recognized5.unique6.outlet7.tactile8.rigid9.secure 10.acquisition 11.fruitful 12.foundationCLOS E :ADBBA DDCBA CCADB DCA BAT ranslation:1.Tt is generally accepted that the upbring of the childild with the home is closely related to the education of the child in school.2.Jennifer is the chief of personal for the NewYork Heracal Tribune ,where she is also responsible editcrical work in the fie d of public relations .3. It was two years ago that Jeff met Rose at his sister"s birthaday party ,they had been communited with each other by email since then the more Jeff know Rose ,the more he like her.4. The measure is effective not only in providing job opportunity for the laid-off works ,but also in limiting price increases.5Similarly , they inefficiency treated the political and economical back ground of the cinspiracy.6Tremendacs capital has met the needs of rapid economic growth on one hand and has caused the in flation on the o ther hand.第六单元Vocabulary Study1.As a matter of fact2.are plagued3.versus4.have ben reflecting on5.positive 6is implementing 7.will enroll 8.has been enriched 9.aspiration 10.academic 11.well-informed mitment toCLOS E:: BBA CB BCAAD CBABD BBCCBT ranslation:1.Passing the English examination should enhance your chances of getting the post.2.The discovery of god in the valley will enrish the mountain area.3. Only when police confronted her with evidence ,did she admited that she had stolen the money.4. 
The meting will afford you an opportunity of hearing good public speakers.5. An offical statement laid to rest the remain fears aout possible redundancies in the industry.6.MA RY IS pretty bright ,ASa matter of ,her teacher told me that she is certain to get a university place this year.第七单元:Vocabulary Study1.downsized2.dynamic3.yield4.had guaranteed5.inflict6.budget7.priority8.accelerating9.shirk 10.vitally 11.jiopardize 12.criteria CLOS E:CDBCA BCBBA DDA BA BCCDCT ranslation:1.Black people in this area complained to the goverment that they had been subjected to repeated racial attacks from the local police officers.2.The goverment officers are inclined to apply the science and technology to short -term projects ,which is benifical to scientific development.3.The precious manager did not want to invest time and money in training the employees who could leave the company any times,an a result of which ,there was a scrious talent drain.4.We are collecting money for repairs to the church ,if any of you would like to make a contribution ,we shall be most grateful .5.The scientist are currently focus on making expeiments,in hope of finding efctive methods to cure cancers.6.The speach that the chairman delivered at the conferences made much sense to regain the confidence of empolyees.第八单元Vocabulary Study1.acid2.shaded3.knock-on4.Drain5.banks6.spacing7.band8.needles9.filter 10.altitudes 11.variables anis mCLOS E: CACDA BAACD DBDCB ACDABT ranslation:1.Mexico city is one of the most popucated cities in the world.2.The finding shows that a substantial difference between the opinions of men and woman.3. I can’t r emember whether i left my credit card home or in the car .4.We cannot assume the suspects to be gulity simly because they are deliveed to remain silent.5.The main question that faces Chinese economists at present is how to use the price levers and the competition system to direct resources into areas which yield better returns.6.It is assumed in the next 20 years the most surprisiing development will take pace in the space flight ,but now space craft a being developed and they can be used many times instand of only one.第九单元:Vocabulary Study1applied 2repentance 3penetrated 4monkey-like 5contrast 6impressed 7passionate 8awakened 9succeeded 10blossoms 11expectant 12opportunityCLOS E: CDBAA DA BCC DBCBC DABBAT ranslation:1.On those days and nights when I was waiting for the results of the Entrance Examination,my heart was filled with wonder(or uncertainty).Iwondered what the fouture held for me,of surprise and excitement or disappointment and sorrow.2.The little girl ran swiftly to catch up with her mother and stretched out her hand to her mother,crying for more candies and drinks.But the mother persisted that they had bought encough.3.His wife was sent into the operation room.He walked to and fro outside,smoking one cigarette after another.When he saw the door opened,he felt a thrill of tenseness.4.Under the gaze of his colleagues,he flushed with embarrassment and bowwed hi head.He casually took up a newsp aper and hid hi face behind it,petending to be reading it.5.They fixed all their attention on the research project on hand.They believed that as long as they didn’t lose heart,they woule succeed in thesse experiments in time.6.She had just given birth to an infant when her husband left/deserted her.In despair,she shut herself and the infant in aroom,claiming that they would perish together.Thanks to the policemen who came in time,the mother and the daughter escaped death by a hair’s 
breath(or had a narrow escape).第十单元:Vocabulary Study:1.in his own right2.make good use of3.ready for4.went forth5.are looking forward to6.figure out7.assembled8.were diss olving9.withstand 10.controversy 11.an obstacle 12.resembleCLOS E:CABBD ADADA DDCCC BCADAT ranslation:1.we must figure out how to solve the tissue-rejecuion problem.2.his behavior under fire approved him a man of courage.3.In addition to the impressiveness of the settings,there is aue of the camera that at times seems magical.4.At first,no ready technical data were available,but we managed to go without.5.She has some difficulty in giving shape to her ideas.But she resembles her mother in the way she moves her hands when she talks.6.The most part of their designs corresponds to actual needs and regulations on environmental demand,the other part needs reconsidering.一个小男孩的梦想马克吐温1我小的时候,我们那密西西比河西岸的村镇上,玩伴们都只有一个水恒的志愿。
Contents
Abstract ..... I
Introduction ..... II
Efficient Accounting Systems ..... 1
Chapter 1 Accounting ..... 1
1.1 The decision of accounting ..... 2
1.2 The functions of accounting ..... 4
Chapter 2 Accounting environment ..... 7
2.1 The goal of accounting is the starting point from which the accounting environment affects the accounting information system ..... 9
2.2 Accounting assumptions reveal the close link between accounting and its interdependent external environment ..... 10
2.3 Accounting standards disclose the requirements of the accounting environment on accounting information ..... 13
Chapter 3 Accounting system ..... 15
References ..... 30
Thanks ..... 31
Conclusion ..... 32

Efficient Accounting Systems
The existence and development of everything depend on certain environmental conditions, and accounting, as one of the most important human practices, is no exception. The accounting environment is fundamental to the smooth conduct of accounting activities, and the various environmental factors each have their own impact on the overall activity of the accounting system. Studying the structure of the accounting environment system and the relations between its various elements makes us much more realistic about accounting theory and practice and gives clear direction for building a harmonious environmental system, so as to promote the sustainable development of accounting.

Chapter 1 Accounting
1.1 The decision of accounting
Accounting is one of the fastest growing fields in the modern business world. Every new store, school, restaurant, or filling station, indeed any new enterprise of any kind, increases the demand for accountants. Consequently, the demand for accountants is generally much greater than the supply. Government officials often have a legal background; similarly, the men and women in management often have a background in accounting. They are usually familiar with the methodology of finance and the fundamentals of fiscal and business administration. Today's accountants are as diverse as their job assignments. Accountants may be male or female, outgoing or conservative, but they are all analytical. They may have backgrounds in art history or computer programming. They come from every ethnic and cultural background. An accounting background can open doors to most lines of business. In short, accounting deals with all facets of an organization: purchasing, manufacturing, marketing, and distribution.
This is why accounting provides such an excellent basis for business experience・ Accounting is an information system necessitated by the great complexity of modern business・1.2The functions of accountingOne of the most important functions of accounting is to accumulate and report financial information that shows ail organization^ financial position and the results of its operations to its interested users・ These users include managers, stockholders, banks and other creditors, governmental agencies, investment advisors, and the general public. For example, stockliolders must have an organization^ financial iiifonnation in order to measure its managementperfbnnance and to evaluate tlieii' own holdings・ Banks and other creditors must consider the financial strength of a business before permitting it to borrow funds. Potential investors need financial data in order to compare prospective investments. Also many laws require that extensive financial information be reported to the various levels of government. Businesses usually publish such reports at least annually. To meet the needs of the external users, a framework of accounting standards, principles and procedures known as ^generally accepted accounting principles^ have been developed to insure the relevance and reliability of the accounting information contained in these external financial reports. The subdivision of the accounting process that produces these external reports is referred to as financial accounting・Another important function of accounting is to provide the management inside an organization with the accounting iiifonnation needed in the organization^ internal decision-making, which relates to planning, control, and evaluation within an organization. For example, budgets are prepared under the directions of a company's controller on an annual basis and express the desires and goals of the company's management. A performance report is supplied to help a manager focus his attention on problems or opportunities that mightotherwise go unnoticed. Furthermore, cost-benefit data will be needed by a company^ management in deciding among the alternatives of reducing prices, increasing advertising, or doing both in attempt to maintain its market shares. The process of generating and analyzing such accounting information for internal decision -making is often referred to as managerial accounting and the related information reports being prepared are called internal management reports・ As contrasted with financial accounting, a management accounting information system provides both historical and estimated information that is relevant to the specific plans on more frequent basis・ And managerial accounting is not governed by generally accepted accounting principles.Chapter 2 Accounting environmentThe growth of organizations, changes in technology, government regulation, and the globalization of economy during the twentieth century have spurred the development of accounting・ As a result, a number of specialized fields of accounting have evolved in addition to financial accounting and managerial accounting, which include auditing, cost accounting, tax accounting, budgetary accounting, governmental and not -for-profit accounting, human resources accounting, environmental accounting, social accounting, international accounting, etc. For example, tax accounting encompasses the preparation of tax returns and the consideration of the tax consequences of proposed business transactions or alternative courses of action. 
Governmental and not-for-profit accounting specializes in recording and reporting the transactions of various governmental units and other not-for-profit organizations. International accounting is concerned with the special problems associated with the international trade of multinational business organizations. All forms of accounting, in the end, provide information to the related users and help them make decisions.Accountant the environment has, the development closely with accountant related, and decided that accountant the thought that the accounting theory, accountant organize, accountant the legal system as well as the accountancy level of development historic condition and the particular case・Studies accountant the environment the influence which develops to accountant,should take accountant the goat accountant suppose, the accounting standards as the clue.2.1 The goal of accountant is the starting point in which accounting environment affects the accounting information systemEach kind of accountant under the pattern accountant the goal concrete difference may sum up as accountant the environment different result. Looking from longitudinal, the different historical period, accountant the environment is different, accountant the goal is also different, from this causes the accounting information existence huge difference; Looking from crosswise, different national accountant the environment is different, accountant the goal content has the difference, its accounting iiifonnation is also unique. About accountant the goal, the theorists have “the policy-making usefill view” and "the management responsibility view" the stmggle・ What policy-making useful view interdependence is the developed capital market, the resources request and is entiusted with something the relations are establishes through the capital market. Thus, the resources entrusting party and is entrusted with something the side responsibility relations intermediary becomes because of the capital market fuzzy. But the responsibility view to base the resources request which forms in the direct intercourse with is entrusted with something the relations. Western various countries and the international accounting standards committee approve the policy-making useful view. If the international accounting standards committee said that “must focus the attention to provide to the economic decision-making useful iiifbnnation n. Comparatively speaking, the management responsibility view depends on each other accountant the environment and the Chinese present stage economic reform and the development actual situation even more tallies. The current our country financial inventory accountings essential target, should locate, in approaches the tmstee to report the fiduciary duty in the fulfillment situation. Because of from the time, the management responsibility view mainly faces the future, but faces in the past and the present. 
But in accountant confirmed that the standard and the measurement foundation s choices aspect, the foothold in the past and the present must be easier than in the future the foothold, provided the information quality even more drew close to the goal the request.Because just accountant the goal affects the accounting information systems basic reason, therefore, the environment embarks from accountant to accountant the goal locates,can cause the accounting theory to move toward the accounting practice from Yu the accounting practice.2.2accountants supposed has promulgated accountant between the external environment close relations which depended on each other with it.Accountant supposes is the accounting personnel the reasonable judgment which locates to the accounting the change which does not decide accountant who the environment makes, is the accounting basic premise Accountant supposes to financial inventory accounting has the overall importance influence, it is the behavior main body and the general situation embarks from accountant constructs the system info, American Accounting standards Committee Respective Accountant Research depaitment's first memoir is "accountant's fimdamental assumption^. Although theoretically speaking, the sound value information will have the guidance compared to the historical costs information regarding the user future economic decision-making, just like but US Chartered accountant the Association financial report Technical committee will publish the topic will be'Improvement Enterprise Reported that -- Customer Guidance" said that the numerous users did not advocate by the sound value pattern substitution historical costs pattern, its reason will mainly be stems from the guarantee financial report information consistency, reliable and the cost - benefit principle consideration. However, they advocate many kinds of measurement attribute mix valuation.Accountant supposes is based on the external environment uncertainty proposed that therefore, may say that accountant supposes is the accounting theory and accountant the environment connected border meeting point, depends on each other accountant with it the environment to have the extremely close relationship・2.3The accounting standards disclose accountant the environment to the accounting information requestAccountant the environment to accounting standards* influence, may manifest in the accounting standards technical nature, the social two aspects・1.Technical nature. The accountins standards were considered that is one kind of pure objective restraint organization, one merely technicals standard method, its goal is enables accounting practice processing the science, to be reasonable, to be consistent. Sinceproduces the behavior has universal restraint accountant after accountant the standard system, accountant reforms mainly displays in accountant the standard system s reform, but accountant standard system's reform, displays for concrete accountant the computing technique innovation.2・ Sociality. The different accounting standards will have the different accounting information, thus affects the different main body benefit, it will cause part of people to profit, but another part of people possibly suffer injury. The accounting standards produce the economic consequences prove its and impure objective. Accounting standards* sociality materially is the economic interest question, immediate influence to economic interest between related various aspects assignment. 
One of market economy's base elements is the fair competition ; All market economy participants cannot different form the rank difference because of the right status. If the administrative right trades the behavior with the market economy to unify in together, will destroy the market mechanism, will be unable to realize the market economy effective disposition resources function. Therefore accounting standards* formulation organization must be the neutral organization, guarantees the accounting standards fairness and the rationality.Chapter 3 Accounting SystemAccounting system refers to establish accounting and accounting supervision procedure and method of business activities. Effective accounting system should do:1.Confirmed and record all real business, timely and detailed description of economic business, so in the financial and accounting reports of economic business appropriately classified・2.Measurement value of economic business, so in the financial and accounting reports records in the appropriate monetary value・3.Determine the time, business to business records in the appropriate accounting period ・4.In the financial and accounting reports, business and proper disclosure of expression related matters・有效会计体系任何事物都是在一定的环境条件下存在和发展的,作为人类重要实践活动之一的会计活动也不例外。
外文原文:Fair Value Measurements1 In February 2006 the International Accounting Standards Board (IASB) and the US Financial Accounting Standards Board (FASB) published a Memorandum of Understanding reaffirming their commitment to the convergence of US generally accepted accounting principles (GAAP) and International Financial Reporting Standards (IFRSs) and to their shared objective of developing high quality, common accounting standards for use in the world’s capital markets. The convergence work programme set out in the Memorandum reflects the standard-setting context of the ‘roadmap’ developed by the US Securities and Exchange Commission in consultation with the IASB, FASB and European Commission for the removal of the reconciliation requirement for non-US companies that use IFRSs and are registered in the US. The work programme includes a project on measuring fair value.2 The FASB has recently issued Statement of Financial Accounting Standards No. 157 Fair Value Measurements (SFAS 157), on which work was well advanced before the Memorandum of Understanding was published. SFAS 157 establishes a single definition of fair value together with a framework for measuring fair value for US GAAP. The IASB recognised the need for guidance on measuring fair value in IFRSs and for increased convergence with US GAAP. Consequently, the IASB decided to use the FASB’s standard as the starting point for its deliberations. As the first stage of its project, the IASB is publishing in this discussion paper its preliminary views on the principal issues contained in SFAS 157.3 The IASB plans to hold round-table meetings on this discussion paper in conjunction with the development of an exposure draft. Please indicate in your response to this Invitation to Comment if you are interested in taking part in a round-table meeting. Please note that, because of timing and space constraints, not all of those indicating an interest may be able to take part.4 The IASB will consider responses to this Invitation to Comment and the related round-table discussions in developing an exposure draft of an IFRS on fair value measurement. The exposure draft will be prepared specifically for application to IFRSs. Although provisions of SFAS 157 may be used in the preparation of an exposure draft, they may be reworded or altered to be consistent with other IFRSs and to reflect the decisions of the IASB. The IASB plans to publish an exposure draft by early 2008.5 In November 2005 the IASB published for comment a discussion paper, Measurement Bases for Financial Accounting – Measurement on Initial Recognition, written by the staff of the Canadian Accounting Standards Board. Although that paper contained a discussion of fair value, its primary purpose was to discuss which measurement attributes were appropriate for initial recognition. That paper is part of the ongoing Conceptual Framework project that seeks to establish, among other things, a framework for measurement in financial reporting. Because of the different scope and intent of that paper, it is not discussed in this discussion paper. However, comments on that discussion paper relatingto the measurement of fair value will be considered in the development of the exposure draft of an IFRS on fair value measurement as well as in the Conceptual Framework project. Issue 1. SFAS 157 and fair value measurement guidance in current IFRSs6 IFRSs require some assets, liabilities and equity instruments to be measured at fair value in some circumstances. 
However, guidance on measuring fair value is dispersed throughout IFRSs and is not always consistent. The IASB believes that establishing a single source of guidance for all fair value measurements required by IFRSs will both simplify IFRSs and improve the quality of fair value information included in financial reports. A concise definition of fair value combined with consistent guidance that applies to all fair value measurements would more clearly communicate the objective of fair value measurement and eliminate the need for constituents to consider guidance dispersed throughout IFRSs.7 The IASB emphasises that the Fair Value Measurements project is not a means of expanding the use of fair value in financial reporting. Rather, the objective of the project is to codify, clarify and simplify existing guidance that is dispersed widely in IFRSs. However, in order to establish a single standard that provides uniform guidance for all fair value measurements required by IFRSs, amendments will need to be made to the existing guidance. As discussed further in Issue 2, the amendments might change how fair value is measured in some standards and how the requirements are interpreted and applied.8 In some IFRSs the IASB (or its predecessor body) consciously included measurement guidance that results in a measurement that is treated as if it were fair value even though the guidance is not consistent with the fair value measurement objective. For example, paragraph B16 of IFRS 3 Business Combinations provides guidance that is inconsistent with the fair value measurement objective for items acquired in a business combination such as tax assets, tax liabilities and net employee benefit assets or liabilities for defined benefit plans. Furthermore, some IFRSs contain measurement reliability criteria. For example, IAS 16 Property, Plant and Equipment permits the revaluation model to be used only if fair value can be measured reliably This project will not change any of that guidance. Rather, that guidance will be considered project by project. However, the IASB plans to use the Fair Value Measurements project to establish guidance where there currently is none, such as in IAS 17 Leases, as well as to eliminate inconsistent guidance that does not clearly articulate a single measurement objective.9 Because SFAS 157 establishes a single source of guidance and a single objective that can be applied to all fair value measurements, the IASB has reached the preliminary view that SFAS 157 is an improvement on the disparate guidance in IFRSs. However, as discussed in more detail below, the IASB has not reached preliminary views on all provisions of SFAS 157.Issue 2. Differences between the definitions of fair value in SFAS 157 and in IFRSs10 Paragraph 5 of SFAS 157 defines fair value as ‘the price that would be received to sell an asset or paid to transfer a liability in an orderly transaction between market participants at the measurement date.’Bycomparison, fair value is generally defined in IFRSs as ‘the amount for which an asset could be exchanged, or a liability settled, between knowledgeable, willing parties in an arm’s length transaction’ (withsome slight variations in wording in different standards). Thedefinition in SFAS 157 differs from the definitionin IFRSs in three important ways:(a)The definition in SFAS 157 is explicitly an exit (selling) price. Thedefinition in IFRSs is neither explicitly an exit price nor an entry (buying) price.(b)The definition in SFAS 157 explicitly refers to market participants. 
The definition in IFRSs refers to knowledgeable, willing parties in an arm’s length transaction.(c)For liabilities, the definition of fair value in SFAS 157 rests on the notion that the liability is transferred (the liability to the counterparty continues; it is not settled with the counterparty). The definition in IFRSs refers to the amount at which a liabilitycould be settled between knowledgeable, willing parties in an arm’s length transaction.11 These differences are discussed in more detail below.Issue 2A. Exit price measurement objective12 The Basis for Conclusions of SFAS 157 includes the following discussion:C26The transaction to sell the asset or transfer the liability is a hypothetical transaction at the measurement date, considered from the perspective of a market participant that holds the asset or owes the liability. Therefore, the objective of a fair value measurement is to determine the price that would be received for the asset or paid to transfer the liability at the measurement date, that is, an exit price. The Board [FASB] concluded that an exit price objective is appropriate because it embodies current expectations about the future inflows associated with the asset and the future outflows associated with the liability from the perspective of market participants. The emphasis on inflows and outflows is consistent with the definitions of assets and liabilities in FASB Concepts Statement No. 6, Elements of Financial INVITATION TO COMMENT Statements. Paragraph25 of Concepts Statement 6 defines assets in terms of future economic benefits (future inflows). Paragraph 35 of Concepts Statement 6 defines liabilities in terms of future sacrifices of economic benefits (future outflows).13 Paragraph 49 of the IASB’s Framework for the Preparation and Presentation of Financial Statements similarly defines assets and liabilities in terms of inflows and outflows of economic benefits. The majority of IASB members believe that a fair value measurement with an exit price objective is consistent with these definitions and is appropriate because it reflects current market-based expectations of flows of economic benefit into or out of the entity.14 Other IASB members agree with this view, but in their view an entry price also reflects current market-based expectations of flows of economic benefit into or out of the entity. Therefore, they suggest replacing the term ‘fair value’ with terms that are more descriptive of the measurement attribute, such as ‘current entry price’ or ‘current exit price’.15 An entry price measurement objective would differ from the exit price objective in SFAS 157 in that it would be defined as the price that would be paid to acquire an asset or received to assume a liability in an orderly transaction between market participants at the measurement date. Some members of the IASB are of the view that an entry price and an exit price would be the same amount in the same market, assuming that transaction costs are excluded. However, an entity might buy an asset or assume a liability in one market and sell that same asset or transfer that same liability (ie without modification or repackaging) in another market. In such circumstances, the exit price in SFAS 157 would be likely to differ from the entry price.16Some fair value measurements required by IFRSs might not be consistent with an exit price measurement objective. 
In particular, the IASB observes that this might be the case when fair value is required on initial recognition, such as in:(a)IFRS 3,(b)IAS 17 for the initial recognition of assets and liabilities by a lessee under a finance lease, and(c)IAS 39 Financial Instruments: Recognition and Measurement for the initial recognition of some financial assets and financial liabilities.17In developing an exposure draft, the IASB may propose a revised definition of fair value. If so, it will complete a standard-by-standard review of fair value measurements required in IFRSs to assess whether each standard’s intended measurement objective is consistent with the proposed definition. If the IASB concludes that the intended measurement objective in a particular standard is inconsistent with the proposed definition of fair value, either that standard will be excluded from the scope of the exposure draft or the intended measurement objective will be restated using a term other than fair value (such as ‘current entry value’). To assist in its review, the IASB would like to understand how the fair value measurement guidance in IFRSs is currently applied in practice. It therefore requests respondents to identify those fair value measurements in IFRSs for which practice differs from the fair value measurement objective in SFAS 157.Issue 2B. Market participant view18SFAS 157 emphasises that a fair value measurement is a market-basedmeasurement, not an entity-specific measurement. Therefore, a fairvalue measurement should be based on the assumptions that marketparticipants would use in pricing the asset or liability. Furthermore, evenwhen there is limited or no observable market activity, the objective ofthe fair value measurement remains the same: to determine the pricethat would be received to sell an asset or be paid to transfer a liability inan orderly transaction between market participants at the measurementdate, regardless of the entity’s intention or ability to sell the asset ortransfer the liability at that date.19Paragraph 10 of SFAS 157 defines market participants as buyers andsellers in the principal (or most advantageous) market for the asset orliability who are:(a)Independent of the reporting entity; that is, they are not related parties(b)Knowledgeable, having a reasonable understanding about the asset or liability and the transaction based on all available information, including information that might be obtained through due diligence efforts that are usual and customary(c)Able to transact for the asset or liability(d)Willing to transact for the asset or liability; that is, they are motivated but not forced or otherwise compelled to do so.20In comparison, the definition of fair value in IFRSs refers to‘knowledgeable, willing parties in an arm’s length transaction’.Paragraphs 42-44 of IAS 40 Investment Property provide a description of this concept:42The definition of fair value refers to ‘knowledgeable, willing parties’.In this context, ‘knowledgeable’ means that both the willing buyer and the willing seller are reasonably informed about the nature and characteristics of the investment property, its actual and potential uses, and market conditions at the balance sheet date. A willing buyer ismotivated, but not compelled, to buy. This buyer is neither over-eager nor determined to buy at any price. 
The assumed buyer would not pay a higher price than a market comprising knowledgeable, willing buyers and sellers would require.43A willing seller is neither an over-eager nor a forced seller, prepared to sell at any price, nor one prepared to hold out for a price not considered reasonable in current market conditions. The willing seller is motivated to sell the investment property at market terms for the best price obtainable. The factual circumstances of the actual investment property owner are not a part of this consideration because the willing seller is a hypothetical owner (ega willing seller would not take into account the particular tax circumstances of the actual investment property owner).44The definition of fair value refers to an arm’s length transaction.Anarm’s length transaction is one between parties that do not have a particular or special relationship that makes prices of transactions uncharacteristic of market conditions. The transaction is presumed to be between unrelated parties, each acting independently.21The IASB’s preliminary view is that the market participant view is generally consistent with the concepts of a knowledgeable, willing party in an arm’s length transaction that are currently contained in IFRSs. However, in the IASB’s view, the proposed definition more clearly articulates the market-based fair value measurement objective in IFRSs.中文译文:公允价值测量1 在 2006 年二月,国际会计准则委员会 (IASB) 和美国财务会计标准委员会 (FASB) 公布了再断言他们对美国公认会计原则 (GAAP) 和国际的金融报告标准 (IFRSs) 的集中承诺的一个备忘录和对他们的发展中高级质量的被分享的目的, 公共的为全球的使用资本市场的会计准则。
Senior Three Summer English Practice, August 2024
Part One: Listening Comprehension (two sections, 30 points)
Section A (5 questions; 1.5 points each, 7.5 points in total)
Listen to the following five short conversations.
After each conversation there is one question. Choose the best answer from the three options A, B and C.
After each conversation you will have 10 seconds to answer the question and to read the next one.
Each conversation will be read only once.
1. Where does this conversation probably take place?
A. At a bus stop. B. At school. C. At home.
2. What will the speakers do next?
A. Order food. B. Ask for the menu. C. Leave the restaurant.
3. Why does the man make the phone call?
A. To cancel a visit. B. To make an appointment. C. To give some information.
4. What did the speakers do last week?
A. They stayed at a hotel. B. They moved their house. C. They made a special meal.
5. What is the woman most likely to be?
A. A student. B. A teacher. C. A stay-at-home mother.
Section B (15 questions; 1.5 points each, 22.5 points in total)
Listen to the following five conversations or monologues.
After each conversation or monologue there are several questions. Choose the best answer from the three options A, B and C.
Before each conversation or monologue is played, you will have time to read the questions (5 seconds per question); after listening, you will be given 5 seconds to answer each question.
Each conversation or monologue will be read twice.
Listen to Material 6 and answer Questions 6 and 7.
6. What does the girl hope to do at first?
A. Have a talk with the man. B. Find an actress for the school play. C. Receive an invitation from the man.
7. When is the school play?
A. This Monday. B. Next week. C. Next month.
Listen to Material 7 and answer Questions 8 to 10.
4th Edition, 07.20053rd Edition dated 06.19942nd Edition dated 05.19901st Edition dated 09.19872005 Robert Bosch GmbHTable of ContentsIntroduction (5)1. Terms for Statistical Process Control (6)2. Planning .........................................................................................................................................................8 2.1 Selection of Product Characteristics .................................................................................................8 2.1.1 Test Variable ........................................................................................................................8 2.1.2 Controllability ......................................................................................................................9 2.2 Measuring Equipment .......................................................................................................................9 2.3 Machinery .........................................................................................................................................9 2.4 Types of Characteristics and Quality Control Charts ......................................................................10 2.5 Random Sample Size ......................................................................................................................11 2.6 Defining the Interval for Taking Random Samples (11)3. Determining Statistical Process Parameters ................................................................................................12 3.1 Trial Run .........................................................................................................................................12 3.2 Disturbances ....................................................................................................................................12 3.3 General Comments on Statistical Calculation Methods ..................................................................12 3.4 Process Average ..............................................................................................................................13 3.5 Process Variation . (14)4. Calculation of Control Limits ......................................................................................................................15 4.1 Process-Related Control Limits ......................................................................................................15 4.1.1 Natural Control Limits for Stable Processes ......................................................................16 4.1.1.1 Control Limits for Location Control Charts .........................................................16 4.1.1.2 Control Limits for Variation Control Charts ........................................................18 4.1.2 Calculating Control Limits for Processes with Systematic Changes in the Average .........19 4.2 Acceptance Control Chart (Tolerance-Related Control Limits) .....................................................20 4.3 Selection of the Control Chart .........................................................................................................21 4.4 Characteristics of the Different Types of Control Charts . (22)5. 
5. Preparation and Use of Control Charts
  5.1 Reaction Plan (Action Catalog)
  5.2 Preparation of the Control Chart
  5.3 Use of the Control Chart
  5.4 Evaluation and Control Criteria
  5.5 Which Comparisons Can be Made?
6. Quality Control, Documentation
  6.1 Evaluation
  6.2 Documentation
7. SPC with Discrete Characteristics
  7.1 General
  7.2 Defect Tally Chart for 100% Testing
8. Tables
9. Example of an Event Code for Mechanically Processed Parts
  9.1 Causes
  9.2 Action
  9.3 Handling of the Parts/Goods
  9.4 Action Catalog
10. Example of an x̄-s Control Chart
11. Literature
12. Symbols
Index

Introduction
Statistical Process Control (SPC) is a procedure for open or closed loop control of manufacturing processes, based on statistical methods.
Random samples of parts are taken from the manufacturing process according to process-specific sampling rules. Their characteristics are measured and entered in control charts. This can be done with computer support. Statistical indicators are calculated from the measurements and used to assess the current status of the process. If necessary, the process is corrected with suitable actions.
Statistical principles must be observed when taking random samples.
The control chart method was developed by Walter Andrew Shewhart (1891-1967) in the 1920s and described in detail in his book "Economic Control of Quality of Manufactured Product", published in 1931.
There are many publications and self-study programs on SPC. The procedures described in various publications sometimes differ significantly from RB procedures.
SPC is used at RB in a common manner in all divisions. The procedure is defined in QSP0402 [1] in agreement with all business divisions and can be presented to customers as the Bosch approach.
Current questions on use of SPC and related topics are discussed in the SPC work group. Results that are helpful for daily work and of general interest can be summarized and published as QA Information sheets.
SPC is an application of inductive statistics. Not all parts have been measured, as would be the case for 100% testing. A small set of data, the random sample measurements, is used to estimate parameters of the entire population.
In order to correctly interpret results, we have to know which mathematical model to use, where its limits are and to what extent it can be used for practical reasons, even if it differs from the real situation.
We differentiate between discrete (countable) and continuous (measurable) characteristics. Control charts can be used for both types of characteristics.
Statistical process control is based on the concept that many inputs can influence a process. The "5 M's" – man, machine, material, milieu, method – are the primary groups of inputs. Each "M" can be subdivided, e.g. milieu into temperature, humidity, vibration, contamination, lighting, ...
Despite careful process control, uncontrolled, random effects of several inputs cause deviation of actual characteristic values from their targets (usually the middle of the tolerance range). The random effects of several inputs ideally result in a normal distribution for the characteristic. Many situations can be well described with a normal distribution for SPC.
A normal distribution is characterized by two parameters, the mean µ and the standard deviation σ. The graph of the density function of a normal distribution is the typical bell curve, with inflection points at µ − σ and µ + σ.
In SPC, the parameters µ and σ of the population are estimated based on random sample measurements and these estimates are used to assess the current status of the process.

1. Terms for Statistical Process Control

Process
A process is a series of activities and/or procedures that transform raw materials or pre-processed parts/components into an output product. The definition from the standard [3] is: "Set of interrelated or interacting activities which transforms inputs into outputs." This booklet only refers to manufacturing or assembly processes.

Stable process
A stable process (process in a state of statistical control) is only subject to random influences (causes). In particular, the location and variation of the process characteristic are stable over time (refer to [4]).

Capable process
A process is capable when it is able to completely fulfill the specified requirements. Refer to [11] for determining capability indices.

Shewhart quality control chart
Quality control chart for monitoring a parameter of the probability distribution of a characteristic, in order to determine whether the parameter varies from a specified value.

SPC
SPC is a standard method for visualizing and controlling (open or closed loop) processes, based on measurements of random samples. The goal of SPC is to ensure that the planned process output is achieved and that corresponding customer requirements are fulfilled. SPC is always linked to (manual or software supported) use of a quality control chart (QCC). QCCs are filled out with the goal of achieving, maintaining and improving stable and capable processes.
This is done by recording process or product data, drawing conclusions from this data and reacting to undesirable data with appropriate actions.
The following definitions are the same as or at least equivalent to those in [6].

Limiting value
Lower or upper limiting value.

Lower limiting value
Lowest permissible value of a characteristic (lower specification limit LSL).

Upper limiting value
Highest permissible value of a characteristic (upper specification limit USL).

Tolerance
Upper limiting value minus lower limiting value: $T = USL - LSL$

Tolerance range
Range of permissible characteristic values between the lower and upper limiting values.

Center point C of the tolerance range
The average of the lower and upper limiting values: $C = (USL + LSL)/2$
Note: For characteristics with one-sided limits (only USL is specified), such as roughness (Rz), form and position (e.g. roundness, perpendicularity), it is not appropriate to assume $LSL = 0$ and thus to set $C = USL/2$ (also refer to the first comment in Section 4.1.1.1).

Population
The total of all units taken into consideration.

Random sample
One or more units taken from the population or from a sub-population (part of a population).

Random sample size n
The number of units taken for the random sample.

Mean (arithmetic)
The sum of the $x_i$ measurements divided by the number of measurements n:
$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$

Median of a sample
For an odd number of samples put in order from the lowest to the highest value, the value of sample number (n+1)/2. For an even number of samples put in order from the lowest to the highest value, normally the average of the two samples numbered n/2 and (n/2)+1 (also refer to [13]).
Example: For a sample of 5 parts put in order from the lowest to the highest value, the median is the middle value of the 5 values.

Variance of a sample
The sum of the squared deviations of the measurements from their arithmetic mean, divided by the number of samples minus 1:
$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$

Standard deviation of a sample
The square root of the variance: $s = \sqrt{s^2}$

Range
The largest individual value minus the smallest individual value: $R = x_{max} - x_{min}$
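As a minimal illustration of these definitions (not part of the booklet; the sample values are hypothetical), the following Python sketch computes the mean, median, variance, standard deviation and range of one random sample of n = 5 measured values:

```python
# Minimal sketch: statistics of one random sample of n measured values.
# The sample values below are illustrative only.
import statistics

sample = [6.22, 6.28, 6.25, 6.31, 6.26]   # n = 5 hypothetical measurements

n = len(sample)
mean = sum(sample) / n                                       # arithmetic mean x-bar
median = statistics.median(sample)                           # middle value of the ordered sample
variance = sum((x - mean) ** 2 for x in sample) / (n - 1)    # s^2
std_dev = variance ** 0.5                                    # s = sqrt(s^2)
sample_range = max(sample) - min(sample)                     # R = x_max - x_min

print(mean, median, variance, std_dev, sample_range)
```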
2. Planning
Planning is done according to the current edition of QSP 0402 "SPC", which defines responsibilities. SPC control of a characteristic is one possibility for quality assurance during manufacturing and test engineering.

2.1 Selection of Product Characteristics
Specification of SPC characteristics and their processes should be done as early as possible (e.g. by the simultaneous engineering team). They can also, for example, be an output of the FMEA. This should take
• Function,
• Reliability,
• Safety,
• Consequential costs of defects,
• The degree of difficulty of the process,
• Customer requests, and
• Customer connection interfaces, etc.
into account.
The 7 W-questions can be helpful in specifying SPC characteristics (refer to "data collection" in "Elementary Quality Assurance Tools" [8]): Why? Which or what? Which number or how many? Where? Who? When? With what or how exactly?
Example of a simple procedure for inspection planning: Why do I need to know what, when, where and how exactly? How large is the risk if I don't know this?
Note: It may be necessary to add new SPC characteristics to a process already in operation. On the other hand, there can be reasons (e.g. change of a manufacturing method or introduction of 100% testing) for replacing existing SPC control with other actions.
SPC characteristics can be product or process characteristics.

2.1.1 Test Variable
Definition of the "SPC characteristic", direct or indirect test variable.
Note: If a characteristic cannot be measured directly, then a substitute characteristic must be found that has a known relationship to it.

2.1.2 Controllability
The process must be able to be influenced (controlled) with respect to the test variable. Normally manufacturing equipment can be directly controlled in a manner that changes the test variable in the desired way (small control loop). According to Section 1, "control" in the broadest sense can also be a change of tooling, machine repair or a quality meeting with a supplier to discuss quality assurance activities (large control loop).

2.2 Measuring Equipment
Definition and procurement or check of the measuring equipment for the test variable. Pay attention to:
• Capability of measuring and test processes,
• Objectiveness,
• Display system (digital),
• Handling.
The suitability of a measurement process for the tested characteristic must be proven with a capability study per [12]. In special cases, a measurement process with known uncertainty can be used (pay attention to [10] and [12]).
Note: The units and reference value must correspond to the variables selected for the measurement process.

2.3 Machinery
Before new or modified machinery is used, a machine capability study must be performed (refer to QSP0402 [1] and [11]). This also applies after major repairs. Short-term studies (e.g. machine capability studies) register and evaluate characteristics of products that were manufactured in one continuous production run. Long-term studies use product measurements from a longer period of time, representative of mass production.
Note: The general definition of SPC (Section 1) does not presume capable machines. However, if the machines are not capable, then additional actions are necessary to ensure that the quality requirements for manufactured products are fulfilled.

2.4 Types of Characteristics and Control Charts
This booklet only deals with continuous and discrete characteristics. Refer to [6] for these and other types of characteristics. In measurement technology, physical variables are defined as continuous characteristics. Counted characteristics are special discrete characteristics; the value of such a characteristic is called a "counted value". For example, the number of "bad" parts (defective parts) resulting from testing with a limit gage is a counted value (e.g. the number 17, if 17 defective parts were found).
SPC is performed with manually filled out form sheets (quality control charts) or on a computer. A control chart consists of a chart-like grid for entering numerical data from measured samples and a diagram to visualize the statistical indices for the process location and variation calculated from the data. If a characteristic can be measured, then a control chart for continuous characteristics must be used. Normally the $\bar{x}$–s chart with sample size n = 5 is used.

2.5 Random Sample Size
The appropriate random sample size is a compromise between process performance, desired accuracy of the selected control chart (type I and type II errors, operation characteristic) and the need for an acceptable amount of testing. Normally n = 5 is selected. Smaller random samples should only be selected if absolutely necessary.

2.6 Defining the Interval for Taking Random Samples
When a control chart triggers action, i.e.
when the control limits are exceeded, the root cause must be determined as described in Section 5.4, reaction to the disturbance initiated with suitable actions (refer to the action catalog) and a decision made on what to do with the parts produced since the last random sample was taken. In order to limit the financial "damage" caused by potentially necessary sorting or rework, the random sample interval – the time between taking two random samples – should not be too long.
The sampling interval must be individually determined for each process and must be modified if the process performance has permanently changed. It is not possible to derive or justify the sampling interval from the percentage of defects. A defect level well below 1% cannot be detected on a practical basis with random samples. A 100% test would be necessary, but this is not the goal of SPC. SPC is used to detect process changes.
The following text lists a few examples of SPC criteria to be followed.
1. After setup, elimination of disturbances or after tooling changes or readjustment, measure continuously (100% or with random samples) until the process is correctly centered (the average of several measurements/medians!). The last measurements can be used as the first random sample for further process monitoring (and entered in the control chart).
2. Random sample intervals for ongoing process control can be defined in the following manner, selecting the shortest interval appropriate for the process:
- definition corresponding to the expected average frequency of disturbances (as determined in the trial run or as is known from previous process experience): approximately 10 random samples within this time period;
- definition depending on specified preventive tooling changes or readjustment intervals: approximately 3 random samples within this time period;
- specification of tooling changes or readjustment depending on SPC random samples: approximately 5 random samples within the average tooling life or readjustment interval.
But sample at least once for the production quantity that can still be contained (e.g. delivery lot, transfer to the next process, defined lots for connected production lines)!
3. Take a final random sample at the end of a series, before switching to a different product type, in order to confirm process capability until the end of the series.
Note: The test interval is defined based on quantities (or time periods) in a manner that detects process changes before defects are produced. More frequent testing is necessary for unstable processes.

3. Determining Statistical Process Parameters

3.1 Trial Run
Definition of control limits requires knowledge or estimation of process parameters. This is determined with a trial run with sampling size and interval as specified in Sections 2.5 and 2.6. For an adequate number of parts for initial calculations, take a representative number of unsorted parts, at least m = 25 samples (with n = 5, for example), yielding no fewer than 125 measured values. It is important to assess the graphs of the measured values themselves, the means and the standard deviations. Their curves can often deliver information on process performance characteristics (e.g. trends, cyclical variations).

3.2 Disturbances
If non-random influences (disturbances) occur frequently during the trial run, then the process is not stable (not in control).
The causes of the disturbances must be determined and eliminated before process control is implemented (repeat the trial run).

3.3 General Comments on Statistical Calculation Methods
Complicated mathematical procedures are no longer a problem due to currently available statistics software, and use of these programs is of course allowed and widespread (also refer to QSP0402 [1]). The following procedures were originally developed for use with pocket calculators. They are typically included in statistics programs.
Note: Currently available software programs allow use of methods for preparing, using and evaluating control charts that are better adapted to process-specific circumstances (e.g. process models) than is possible with manual calculation methods. However, this unavoidably requires better knowledge of statistical methods and use of statistics software. Personnel and training requirements must take this into account. Each business division and each plant should have a comprehensively trained SPC specialist as a contact person.

3.4 Process Average
Parameter µ is estimated by
$\hat{\mu} = \bar{\bar{x}} = \frac{1}{m}\sum_{j=1}^{m}\bar{x}_j$ (total of the $\bar{x}$ values divided by the number of samples)
Example (Section 10): $\hat{\mu} = \bar{\bar{x}} = 6.23$
or
$\hat{\mu} = \bar{\tilde{x}} = \frac{1}{m}\sum_{j=1}^{m}\tilde{x}_j$ (total of the medians divided by the number of samples)
Example (Section 10): $\hat{\mu} = \bar{\tilde{x}} = 6.24$
If $\hat{\mu}$ significantly deviates from the center point C for a characteristic with two-sided limits, then this deviation should be corrected by adjusting the machine.

3.5 Process Variation
Parameter σ is estimated by:
a) $\hat{\sigma} = \sqrt{\frac{1}{m}\sum_{j=1}^{m} s_j^2}$ (square root of the total of the variances divided by the number of samples)
Note: $\hat{\sigma} = s$ is calculated directly from the 25 individual measurements taken from sequential random samples (pocket calculator).
or b) $\hat{\sigma} = \bar{s}/a_n$, where $\bar{s} = \frac{1}{m}\sum_{j=1}^{m} s_j$ (total of the standard deviations divided by the number of samples).
Example (Section 10): $\bar{s} = 0.127$, so $\hat{\sigma} = 0.127/0.94 = 0.135$.
n: 3, 5, 7 → $a_n$: 0.89, 0.94, 0.96 (see Section 8, Table 1 for additional values)
or c) $\hat{\sigma} = \bar{R}/d_n$, with $\bar{R} = \frac{1}{m}\sum_{j=1}^{m} R_j$ (total of the ranges divided by the number of samples).
Example (Section 10): $\bar{R} = 0.296$, so $\hat{\sigma} = 0.296/2.33 = 0.127$.
n: 3, 5, 7 → $d_n$: 1.69, 2.33, 2.70 (see Section 8, Table 1 for additional values)
Note: The use of the table values $a_n$ and $d_n$ presupposes a normal distribution!
Some of these calculation methods were originally developed to enable manual calculation using a pocket calculator. Formula a) is normally used in currently available statistics software.
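A compact Python sketch of estimation methods a) to c) is shown below (illustrative only and not part of the booklet; the subgroup values are hypothetical, and the factors a_n and d_n are the tabulated values for n = 5):

```python
# Sketch: estimating the process average and standard deviation from m subgroups.
# Subgroup data are hypothetical; a_n and d_n are the tabulated values for n = 5.
import math
import statistics

subgroups = [[6.22, 6.28, 6.25, 6.31, 6.26],
             [6.20, 6.24, 6.27, 6.23, 6.25],
             [6.25, 6.29, 6.22, 6.26, 6.24]]   # normally at least m = 25 subgroups

a_n, d_n = 0.94, 2.33                           # table values for n = 5

xbars   = [statistics.mean(g) for g in subgroups]
medians = [statistics.median(g) for g in subgroups]
stdevs  = [statistics.stdev(g) for g in subgroups]     # sample s of each subgroup
ranges  = [max(g) - min(g) for g in subgroups]

mu_hat        = statistics.mean(xbars)                  # estimate via subgroup means
mu_hat_median = statistics.mean(medians)                # alternative estimate via medians

sigma_a = math.sqrt(sum(s ** 2 for s in stdevs) / len(stdevs))   # method a)
sigma_b = statistics.mean(stdevs) / a_n                           # method b): s-bar / a_n
sigma_c = statistics.mean(ranges) / d_n                           # method c): R-bar / d_n

print(mu_hat, mu_hat_median, sigma_a, sigma_b, sigma_c)
```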
4. Calculation of Control Limits

4.1 Process-Related Control Limits
The control limits (lower control limit LCL and upper control limit UCL) are set such that 99% of all the values lie within the control limits in the case of a process which is only affected by random influences (random causes). If the control limits are exceeded, it must therefore be assumed that systematic, non-random influences (non-random causes) are affecting the process. These effects must be corrected or eliminated by taking suitable action (e.g. adjustment).
Relation between the standard deviation $\sigma_x$ of the single values (original values, individuals) and the standard deviation $\sigma_{\bar{x}}$ of the mean values: $\sigma_{\bar{x}} = \sigma_x/\sqrt{n}$.

4.1.1 Natural Control Limits for Stable Processes

4.1.1.1 Control Limits for Location Control Charts (Shewhart Charts)
For two-sided tolerances, the limits for controlling the mean must always be based on the center point C.
Note: C is replaced by the process mean $\hat{\mu} = \bar{\bar{x}}$ for processes where the center point C cannot be achieved or for characteristics with one-sided limits.
* Do not use for moving calculation of indices!
Note: Use of the median-R chart is only appropriate when charts are manually filled out, without computer support.
Factors for the location control charts (estimated values $\hat{\mu}$ and $\hat{\sigma}$ are calculated per Sections 3.4 and 3.5; refer to Section 8, Table 2 for additional values):
n = 3: 1.68, 1.02, 1.16, 2.93, 1.73
n = 5: 1.23, 0.59, 1.20, 3.09, 1.33
n = 7: 1.02, 0.44, 1.21, 3.19, 1.18
Comments on the average chart: For characteristics with one-sided limits (or in general for skewed distributions) and small n, the random sample averages are not necessarily normally distributed. It could be appropriate to use a Pearson chart in this case. This chart has the advantage compared to the Shewhart chart that the control limits are somewhat wider apart. However, it has the disadvantage that calculation of the control limits is more complicated, in actual practice only possible on the computer.

Control charts with moving averages
The $\bar{x}$ chart with a moving average is a special case of the $\bar{x}$ chart. For this chart, only single random samples are taken. n sample measurements are formally grouped as a random sample and the average of these n measurements is calculated as the mean. For each new measurement from a single random sample that is added to the group, the first measurement of the last group is deleted, yielding a new group of size n, for which the new average is calculated.
Of course, moving averages calculated in this manner are not mutually independent. That is why this chart has a delayed reaction to sudden process changes. The control limits correspond to those for "normal" average charts:
$LCL = C - \frac{2.58}{\sqrt{n}}\hat{\sigma}$, $UCL = C + \frac{2.58}{\sqrt{n}}\hat{\sigma}$
Calculate $\hat{\sigma}$ according to Section 3.5 a).
Control limits for n = 1 (3): $LCL = C - 1.5\,\hat{\sigma}$, $UCL = C + 1.5\,\hat{\sigma}$
Example for n = 1 (3): single values 3, 7, 4 give $\bar{x}_1 = 4.7$; adding 9 gives $\bar{x}_2 = 6.7$; adding 2 gives $\bar{x}_3 = 5.0$; adding 8 gives $\bar{x}_4 = 6.3$ (each moving average uses the latest 3 values).
This approach for moving sample measurements can also be applied to the variation, so that an $\bar{x}$–s chart with a moving average and moving standard deviation can be used. After intervention in the process or process changes, previously obtained measurements may no longer be used to calculate moving indices.

4.1.1.2 Control Limits for Variation Control Charts
The control limits to monitor the variation (depending on n) relate to $\hat{\sigma}$ and $\bar{s}$ and likewise $\bar{R}$ (= "central line").

s chart
a) generally applicable formula (also for the moving $\bar{x}$–s chart):
$UCL = B'_{Eob}\cdot\hat{\sigma}$, $LCL = B'_{Eun}\cdot\hat{\sigma}$
Example (Section 10): $UCL = 1.93 \cdot 0.135 = 0.26$, $LCL = 0.23 \cdot 0.135 = 0.03$
b) for the standard $\bar{x}$–s chart:
$UCL = B^{*}_{Eob}\cdot\bar{s}$, $LCL = B^{*}_{Eun}\cdot\bar{s}$
Example (Section 10): $UCL = 2.05 \cdot 0.127 = 0.26$, $LCL = 0.24 \cdot 0.127 = 0.03$
Note: Formula a) must be used in the case of moving s calculation. Calculation of $\hat{\sigma}$ per Section 3.5 a).

R chart
$UCL = D_{Eob}\cdot\bar{R} = 2.10 \cdot 0.296 = 0.62$, $LCL = D_{Eun}\cdot\bar{R} = 0.24 \cdot 0.296 = 0.07$

Table (see Section 8, Table 2 for further values):
n = 3: $B'_{Eun}$ = 0.07, $B'_{Eob}$ = 2.30, $B^{*}_{Eun}$ = 0.08, $B^{*}_{Eob}$ = 2.60, $D_{Eun}$ = 0.08, $D_{Eob}$ = 2.61
n = 5: $B'_{Eun}$ = 0.23, $B'_{Eob}$ = 1.93, $B^{*}_{Eun}$ = 0.24, $B^{*}_{Eob}$ = 2.05, $D_{Eun}$ = 0.24, $D_{Eob}$ = 2.10
n = 7: $B'_{Eun}$ = 0.34, $B'_{Eob}$ = 1.76, $B^{*}_{Eun}$ = 0.35, $B^{*}_{Eob}$ = 1.88, $D_{Eun}$ = 0.34, $D_{Eob}$ = 1.91
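The following Python sketch (illustrative only, not part of the booklet; the numerical inputs are placeholders in the spirit of the worked example) shows how the 99% control limits of an x̄–s chart follow from C, σ̂ and s̄ for n = 5:

```python
# Sketch: process-related control limits for an x-bar/s chart (99% limits), n = 5.
# All numerical inputs are placeholders chosen for illustration.
import math

n         = 5
C         = 6.25      # center point of the tolerance range
sigma_hat = 0.135     # estimated process standard deviation (Section 3.5)
s_bar     = 0.127     # average subgroup standard deviation

# Location (x-bar) chart: C +/- 2.58 * sigma_hat / sqrt(n)
lcl_x = C - 2.58 * sigma_hat / math.sqrt(n)
ucl_x = C + 2.58 * sigma_hat / math.sqrt(n)

# Variation (s) chart, formula b) with B*-factors tabulated for n = 5
B_un, B_ob = 0.24, 2.05
lcl_s = B_un * s_bar
ucl_s = B_ob * s_bar

print(lcl_x, ucl_x, lcl_s, ucl_s)
```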
4.1.2 Calculating Control Limits for Processes with Systematic Changes in the Average
If changes of the mean need to be considered as a process-specific feature (trend, lot steps, etc.) and it is not economical to prevent such changes of the mean, then it is necessary to extend the "natural control limits". The procedure for calculating an average chart with extended control limits is shown below.
The overall variation consists of both the "inner" variation (refer to Section 3.5) of the random samples and of the "outer" variation between the random samples.
Calculation procedure: control limits for the mean.
arXiv:0705.0601v1 [cond-mat.stat-mech] 4 May 2007
CUPhys/12/2006

Floating Phase in 2D ANNNI Model
Anjan Kumar Chandra and Subinay Dasgupta
Department of Physics, University of Calcutta, 92 Acharya Prafulla Chandra Road, Calcutta 700009, India.
(Dated: February 1, 2008)

We investigate whether the floating phase (where the correlation length is infinite and the spin-spin correlation decays algebraically with distance) exists in the temperature (T) - frustration parameter (κ) phase diagram of the 2D ANNNI model. To identify this phase, we look for the region where (i) the finite size effect is prominent and (ii) some relevant physical quantity changes somewhat sharply and this change becomes sharper as the system size increases. For κ < 0.5, the low temperature phase is ferromagnetic and we study energy and magnetization. For κ > 0.5, the low temperature phase is the antiphase and we study energy, layer magnetization, length of domain walls running along the direction of frustration, number of domain-intercepts that are of length 2 along the direction of frustration, and the number of domain walls that do not touch the upper and/or lower boundary. In agreement with some previous studies, our final conclusion is that the floating phase exists, if at all, only along a line.

PACS numbers: 05.70.Jk, 05.10.Ln, 64.60.Fr

I. INTRODUCTION

The two-dimensional Axial Next-Nearest Neighbor Ising (ANNNI) model (spin = ±1) ...

FIG. 1: Schematic phase diagram of the two-dimensional ANNNI model according to previous studies.

... region extending from κ < 0.5 to κ > 0.5. There is thus a controversy about the existence of an algebraically decaying phase in the 2D ANNNI model. In this article, we shall present Monte Carlo simulations of the 2D ANNNI model with a view to locating the floating phase, if any.
For κ < 0.5, one can easily identify the magnetization as the order parameter, and for this case we have therefore measured
1. internal energy,
2. magnetization.
For κ > 0.5, the ordered phase is the "antiphase" and it is difficult to identify the order parameter unambiguously. In this case, we have measured
1. internal energy,
2. layer magnetization (magnetization perpendicular to the direction of frustration),
3. length of domain walls running along the direction of frustration,
4. number of domain-intercepts that are of length 2 over a straight line along the direction of frustration,
5. number of dislocations, measured as the number of domain walls that do not touch the upper and/or lower boundary.
We shall discuss later (Sec. III) the significance of these quantities in the context of our work.
From the measurement of a suitable physical quantity Q(t) at a time t, the critical point (or, for that matter, the critical region) could be identified from the general principle that at the critical point the quantity Q(t) − Q(∞) is expected to vanish algebraically as a function of time t. While this characteristic is handy for the case Q(∞) = 0, it is not usable when Q(∞) ≠ 0, as a very long simulation is required to measure the quantity Q(∞) itself. In such cases, we have utilized two essential features of critical phenomena: (i) for a finite lattice, the correlation time will diverge as [14] $\tau \sim L^z$, hence the quantity Q(t) will depend strongly on the system size only at the critical temperature T_c, even at finite values of t; (ii) the equilibrium value Q(∞) will undergo a sudden change, which is detectable even for small size and becomes more and more drastic as the system size increases.
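For orientation, here is a minimal Python sketch of a single-spin-flip (sequential Metropolis) sweep for the 2D ANNNI model, with ferromagnetic nearest-neighbour coupling J = 1 in both directions and a competing axial next-nearest-neighbour coupling −κ along X; it is an illustration written for this summary (array layout, coupling normalization and temperature units are assumptions), not the code used for the results reported here:

```python
# Minimal sketch (not the authors' code): one sequential Metropolis sweep for the
# 2D ANNNI model with NN coupling J = 1 along X and Y and an axial NNN coupling
# -kappa along X, periodic boundary conditions. T is in units of J/k_B.
import numpy as np

def metropolis_sweep(spins, kappa, T, rng):
    L = spins.shape[0]
    for x in range(L):
        for y in range(L):
            s = spins[x, y]
            # local field: nearest neighbours (both axes) minus kappa times axial NNN
            h = (spins[(x + 1) % L, y] + spins[(x - 1) % L, y]
                 + spins[x, (y + 1) % L] + spins[x, (y - 1) % L]
                 - kappa * (spins[(x + 2) % L, y] + spins[(x - 2) % L, y]))
            dE = 2.0 * s * h                       # energy change if spin (x, y) flips
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[x, y] = -s
    return spins

rng = np.random.default_rng(0)
L, kappa, T = 64, 0.45, 1.0                        # illustrative values
spins = np.ones((L, L), dtype=int)                 # ferromagnetic initial configuration
for _ in range(100):
    metropolis_sweep(spins, kappa, T, rng)
```

A production run would repeat such sweeps for each (κ, T, L), average over many realizations, and accumulate the observables defined below.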
As mentioned above, the results from the non-equilibrium relaxation study [5] and the density matrix renormalization group analysis [6] contradict the previous studies [5,9,15] as regards the extent of the floating phase. This paper confirms the conclusion of the former two studies by Monte Carlo simulation. One should note that all these studies agree at sufficiently low temperatures. We measure some observables that play a crucial role in the underlying physics and that have not been analyzed till now.
In Sections II and III we shall present the simulation studies for κ < 0.5 and κ > 0.5 respectively. All the simulation studies were performed with the sequential Metropolis algorithm using periodic boundary conditions in the X and Y directions, and the results were averaged over 10 to 50 realizations. In Section IV we shall study the correspondence between the 2D ANNNI model and the transverse ANNNI chain and in Section V present conclusions. Our final conclusion is that the divergent correlation time exists only along a line. The phase diagram obtained is presented in Fig. 2. Everywhere in this communication temperature is measured in units of T_c^(0) = 0.44069, the critical temperature for nearest-neighbor interaction (κ = 0). This diagram is in qualitative agreement with that obtained by Shirahata and Nakamura [5] and Derian, Gendiar and Nishino [6]. The small difference between our results and those obtained by these authors seems to be due to the small (∼1000×1000) size of our simulation, compared to that of Shirahata and Nakamura (∼6399×6400).

FIG. 2: The phase diagram of the two-dimensional ANNNI model as obtained from the present study, from non-linear relaxation (NER) [5] and density matrix renormalization group (DMRG) [6]. The temperature is measured in units of T_c^(0) = 0.44069 as mentioned in the text.

II. SIMULATION STUDIES FOR κ < 0.5

A. Energy

Before we consider the measurements on the 2D ANNNI model itself, let us break off for a discussion on the critical behavior of energy relaxation in general. Internal energy E is always unambiguously defined, in contrast to the order parameter, which for some phase transitions (like our system for κ > 0.5) may not be easy to identify and measure. However, since E(∞) ≠ 0, it is difficult to study the time variation of the quantity E(t) − E(∞), as mentioned above. The relaxation of energy has also been studied elsewhere [16,17]. We shall now consider the case of κ = 0 (only nearest-neighbor interaction), and obtain the exponent, following standard scaling arguments [16,18,19]. Starting from the standard diffusion equation for the (non-conserved) order parameter ψ,
$\frac{\partial \psi}{\partial t} = -\Gamma \frac{\partial F}{\partial \psi}$,
one obtains ...

FIG. 4: Energy relaxation for κ = 0.45 at temperature T for an L×L square lattice. The numbers at the right margin indicate T and L values. It is important to note that for T = 0.40, 0.405, 0.41 and 0.42, the curves for L = 1000 and 1200 coincide with that of L = 700.

FIG. 5: Energy relaxation for κ = 0.45 for a 700×700 lattice. The numbers indicate temperature. Note that the linear (algebraic) region is most prominent for T = 0.404. The algebraic region fits to 0.17 t^{-0.15}. We could not furnish the curve for T = 0.405 as the system takes too long a time to equilibrate.

... the magnetization decays as
$M \sim t^{-\sigma'}$   (6)
where σ′ = β/νz. For κ = 0, the value of σ′ is 0.05734. At κ = 0.45 the magnetization is found to relax algebraically only around
T = 0.405 ± 0.002, which therefore is the critical temperature (Fig. 6). That the critical temperature at κ = 0.45 lies between 0.40 and 0.41 is also verified by the fact that there is a sudden change in the equilibrium value of magnetization as temperature increases from 0.40 to 0.41, and that this change becomes more and more sudden as the lattice size increases (Fig. 7). The exponent for magnetization decay turns out to be σ′ = 0.02 ± 0.005 at κ = 0.45, T = 0.404. As for the case of energy relaxation, it is a computationally intensive job to determine σ′ accurately. Approximate measurements indicate that, just like σ, the exponent σ′ also remains more or less the same up to κ = 0.4 and starts decreasing markedly at higher κ. Further investigations on the apparent variation of σ and σ′ are in progress.

FIG. 6: Relaxation of magnetization at κ = 0.45 for a 1000×1000 lattice. The linear (algebraic) region is most prominent for T = 0.404 and fits to 1.08 t^{-0.017}.

FIG. 7: Equilibrium value of magnetization at κ = 0.45 as a function of temperature.

III. SIMULATION STUDIES FOR κ > 0.5

In this case, it is difficult to identify the order parameter, as mentioned above. It can be easily proved [1,2,3] that, at low temperature, the system is in a perfectly ordered state with like spins along the Y axis and a ++-- pattern repeated along the X direction. The domain walls thus run exactly parallel to the Y axis. Moving along the X (Y) axis, one finds domains of length 2 (L), for an L×L system. As one increases the temperature, after some temperature T_c (called the lower critical temperature), domains of length larger than 2 appear along the X direction. The domain walls now do not always run parallel to the Y axis. They start from the lower boundary and terminate at the upper, but they often take small steps parallel to the X axis. Along the Y axis the domains are now sometimes less than L in length.
Villain and Bak [4] pointed out that the number of domains that do not touch both boundaries is crucial and represents some sort of "dislocation". This number is almost zero immediately above T_c but suddenly increases at some temperature T_2. While T_c marks a second order commensurate-incommensurate (Pokrovsky-Talapov type) transition, T_2 marks a Kosterlitz-Thouless type transition. For T > T_2 domains of small size appear in the X and Y directions and the system is in a paramagnetic state. For T_c < T < T_2, Villain and Bak claimed that the wavelength of modulation changes continuously with temperature and the correlation length is infinite, leading to a spin-spin correlation that decays algebraically with distance.
To study the two transitions, one at T = T_c and the other at T = T_2, we measure several quantities, always starting the simulation with the antiphase as the initial configuration.
(1) Internal energy is studied in the same way as done for κ < 0.5.
(2) Layer magnetization is defined (following [5]) as the magnetization of a chain along the Y axis, averaged over all such chains:
$m_l = \frac{1}{L}\left|\sum_{y=1}^{L} s_{x,y}\right|$
Obviously, this quantity should be 1 for T < T_c and zero above T_2. Hence, the size dependence of the relaxation of m_l and the algebraic nature of the relaxation of m_l would indicate a diverging correlation length. Shirahata and Nakamura [5] have studied this quantity to identify the upper transition temperature T_2 at κ = 0.60 and 0.80.
(3) The sum of the lengths of the segments of domain walls that are parallel to the X axis, divided by the system size, gives a quantity (say, d_x) which is strictly zero in the perfect antiphase, but increases suddenly to some non-zero value at T_c. Its measurement leads to an estimate of T_c.
(4) Moving in the X-direction, one may note the lengths of domain intercepts encountered, and compute the ratio f_2 = n_2/n_t, where n_2 is the number of domains of length 2 and n_t is the total number of domains. This ratio is 1 for T < T_c and decreases for higher T. We note the region over which the relaxation of f_2 depends on size or is algebraic in nature.
(5) The spin domains are identified and the fraction (f_d) of domains that do not touch the upper and/or lower boundary is counted. (Those which miss any one boundary are counted with weightage 1 and those which miss both boundaries are given weightage 2.) This fraction measures the number of "dislocations" that drives the Kosterlitz-Thouless phase transition at T_2. We do not study the relaxation of this quantity owing to computational limitations, but obtain from simulation the equilibrium value away from T_c. Such a measurement should lead to an estimation of the Kosterlitz-Thouless transition temperature T_2, if any.
The study of all these quantities leads us to the conclusion that the floating phase, i.e. the region between T_c and T_2, extends, if at all, over a temperature range less than 0.02. Thus, the region of diverging correlation length exists only along a line, up to the accuracy of this study. Our study was performed at κ = 0.55, 0.60, 1.0. The resulting phase diagram is presented in Fig. 2. However, it is interesting to observe that while the width of the critical region is 0.02 for κ > 0.5, it is much less, about 0.001, for κ < 0.5.
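A minimal sketch of how the layer magnetization m_l, the wall-length density d_x and the fraction f_2 could be extracted from one spin configuration is shown below; this is an illustration written for this summary (it assumes spins are stored as a ±1 NumPy array with the first index along the frustrated X direction, and it counts domain intercepts with open boundaries for simplicity), not the authors' analysis code:

```python
# Sketch: layer magnetization m_l, wall-length density d_x, and fraction f_2 of
# length-2 domains, from one L x L spin configuration (spins[x, y] = +/-1).
import numpy as np

def observables(spins):
    L = spins.shape[0]
    # layer magnetization: |column sum along Y| / L, averaged over the X position
    m_l = np.mean(np.abs(spins.sum(axis=1))) / L

    # total length of wall segments parallel to X (anti-aligned vertical neighbours),
    # divided by the number of sites; periodic wrap used for simplicity
    d_x = np.sum(spins != np.roll(spins, -1, axis=1)) / spins.size

    # domain intercepts along X: fraction of runs of length exactly 2
    n2 = n_total = 0
    for y in range(L):
        row = spins[:, y]
        run = 1
        for x in range(1, L):
            if row[x] == row[x - 1]:
                run += 1
            else:
                n_total += 1
                n2 += (run == 2)
                run = 1
        n_total += 1
        n2 += (run == 2)            # last (open-boundary) run of the row
    f_2 = n2 / n_total
    return m_l, d_x, f_2
```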
A. Energy

The study of energy relaxation for κ > 0.5 follows closely the procedure for κ < 0.5, with the only difference that the initial configuration is now the antiphase. For κ = 0.60, the energy relaxation depends on size predominantly at T = 0.43 and 0.44 (Fig. 8) and, for L = 700, the energy difference E(t) − E(∞) shows an algebraic decay over an extended region of time at T = 0.43 and 0.44 (Fig. 9). Therefore, at κ = 0.60 we identify T_c as 0.44 ± 0.01. The curves for κ = 0.55 are qualitatively similar to those for κ = 0.60, and T_c could be identified as 0.37 ± 0.01. An alternative interpretation of the results could be that the floating phase would exist, if at all, between T = 0.43 and 0.44 (0.37 and 0.38) at κ = 0.60 (0.55). That the energy does not show any critical behavior over an extended range of temperature seems to indicate that the floating phase does not exist over an extended region. We mention that we could not perform the study of energy relaxation at κ = 1, since for this case reliable data needs averaging over too many configurations. The exponent for energy relaxation σ′ (see Eq. (5)) is found to be 1.7 ± 0.1 for κ = 0.55 and 0.60. In contrast to the findings for κ < 0.5, we observe no marked variation of σ′ with κ.

FIG. 8: Energy relaxation for κ = 0.60 at temperature T for an L×L square lattice. The numbers at the right margin indicate T and L values. For T = 0.42 and 0.45, the curves for L = 1000 and 1200 coincide with that of L = 700.

FIG. 9: Energy relaxation for κ = 0.60 for a 700×700 lattice. The numbers indicate temperature. Note that the linear (algebraic) region is most prominent for T = 0.44 and fits to 75×10^6 t^{-1.9}.

B. Layer Magnetization

The relaxation of layer magnetization (m_l) is qualitatively similar for κ = 0.55, 0.60 and 1.00. At κ = 0.60 the relaxation shows critical slowing down and a finite size effect at T = 0.44 ± 0.01 (Fig. 10), which is therefore the value of T_2. The equilibrium value of layer magnetization also shows a sharp fall (that becomes sharper as the system size increases) at this temperature, at κ = 0.60 (Fig. 11). For κ = 0.55 the value of T_2 can be estimated in a similar manner to be 0.37 ± 0.01. We could not observe the curve for the equilibrium value of layer magnetization at κ = 1 because for this case one needs too long a simulation to reach the equilibrium value.

FIG. 10: Relaxation of layer magnetization for κ = 0.60 at temperature T for an L×L square lattice. The numbers at the right margin indicate T and L values.

FIG. 11: Equilibrium value of layer magnetization at κ = 0.60 as a function of temperature.

C. Length of domain walls parallel to the direction of frustration

Like the layer magnetisation, this quantity (d_x) shows critical slowing down and a finite size effect at T = 0.44 for κ = 0.60 (Fig. 12). The equilibrium value of d_x shows a sharp rise at this temperature (Fig. 13), and this rise becomes sharper as the system size increases. Hence, the study of d_x indicates that T_c is 0.44 ± 0.01 for κ = 0.60. In the same manner, the value of T_c is estimated to be 0.37 ± 0.01 for κ = 0.55 and 0.76 ± 0.01 for κ = 1.00.

D. Fraction of domains that have length 2

A study of this quantity (f_2) leads to T_c = 0.44 ± 0.01 for κ = 0.60, since size-dependent slowing down of the relaxation of f_2 is observed at this temperature (Fig. 14). Moreover, a sharp fall of the equilibrium value of f_2 is observed at this temperature and this fall becomes slightly sharper as the system size
increases from 100 to 1000 (Fig. 15). In a similar manner, the critical temperature for κ = 0.55 and 1.00 is obtained as 0.37 ± 0.01 and 0.76 ± 0.01 respectively.

FIG. 12: Relaxation of d_x for κ = 0.60 at temperature T for an L×L square lattice. The numbers at the right margin indicate T and L values. For T = 0.42, the curves for L = 500 and 1000 coincide.

FIG. 13: Equilibrium value of d_x at κ = 0.60 as a function of temperature.

FIG. 14: Relaxation of f_2 for κ = 0.60 at temperature T for an L×L square lattice. The numbers at the right margin indicate T and L values.

FIG. 15: Equilibrium value of f_2 at κ = 0.60 as a function of temperature.

FIG. 16: Equilibrium value of f_d at κ = 0.60 as a function of temperature.

E. Fraction of domains that do not touch the boundary (f_d)

We could not study the relaxation of this quantity since averaging over too many realisations is necessary for a reasonably smooth curve. Rather, we could measure the equilibrium value at temperatures where the relaxation was not prohibitively slow. It is found that for κ = 0.60, the critical temperature lies between 0.42 and 0.45, since a sudden change is observed in the equilibrium value of f_d in this temperature range and this change becomes more and more sudden as the lattice size increases (Fig. 16). Similar behaviour is also observed for κ = 0.55 between temperatures 0.35 and 0.40. This study could not be done for κ = 1.0 because of computational limitations (one has to average over a good number of realisations to get reliable data). In spite of the computational difficulties for the study of f_d, it is clear that the study of this quantity excludes the possibility of a critical region with width larger than 0.02 along the temperature axis.

FIG. 17: Schematic phase diagram for the Transverse ANNNI model (with Hamiltonian H_q of Eq. (8)), after ref. [13].

IV. MAPPING TO QUANTUM MODEL

A quantum Ising model in d dimensions is related to a classical Ising model in d+1 dimensions by the Suzuki-Trotter transformation [7,9,10], and this relation is the basic idea behind the quantum Monte Carlo algorithm. This transformation for the ANNNI model has been discussed in detail by Arizmendi [8] and can be summarised as follows. The quantum Ising Hamiltonian H_q for the one-dimensional transverse ANNNI model with N sites is given by
$H_q = -J'\sum_{j=1}^{N}\left(s_j^z s_{j+1}^z - \kappa\, s_j^z s_{j+2}^z\right) - \Gamma\sum_{j=1}^{N} s_j^x$.   (8)
The ground state of this model is equivalent to a classical Ising model in two dimensions with Hamiltonian
$H_{cl} = -\sum_{x=1}^{N}\sum_{y=1}^{mn} J_q\, s_{x,y}\left[\left(s_{x+1,y} - \kappa\, s_{x+2,y}\right) + p\, s_{x,y+1}\right]$   (9)
in the limit (m → ∞) and (n → ∞) at a temperature T_q, where
$k_B T_q/J' = \ldots$   (10)
(k_B is the Boltzmann constant) and $p = \ldots$

Acknowledgments

The work of one author (AKC) was supported by a UGC fellowship. We also acknowledge the financial support from DST-FIST for the computational facility.

[1] W. Selke, Phys. Rep. 170, 213 (1988); W. Selke, in Phase Transitions and Critical Phenomena, edited by C. Domb and J. L. Lebowitz, Vol. 15, p. 1-72 (Academic Press, 1992).
[2] J. Yeomans, in Solid State Physics, edited by H. Ehrenreich and
D. Turnbull, Vol. 41, p. 151-200 (Academic Press, 1988).
[3] R. Liebmann, Statistical Mechanics of Periodic Frustrated Ising Systems (Springer-Verlag, Berlin, 1986).
[4] J. Villain and P. Bak, J. de Phys. (Paris) 42, 657 (1981).
[5] T. Shirahata and T. Nakamura, Phys. Rev. B 65, 024402 (2001).
[6] R. Derian, A. Gendiar and T. Nishino, cond-mat/0605411.
[7] M. Suzuki, in Quantum Monte Carlo Methods, ed. M. Suzuki (Springer-Verlag, Berlin, Heidelberg, 1987).
[8] C. M. Arizmendi, A. H. Rizzo, L. N. Epele and C. A. Garcia Canal, Z. Phys. B 83, 273 (1991).
[9] B. K. Chakrabarti, A. Dutta and P. Sen, Quantum Ising Phases and Transitions in Transverse Ising Models (Springer-Verlag, Berlin, Heidelberg, 1996).
[10] B. K. Chakrabarti and A. Das, Quantum Annealing and Related Optimization Methods, ed. B. K. Chakrabarti and A. Das (Springer-Verlag, Berlin, Heidelberg, 2005) (cond-mat/0312611).
[11] G. Uimin and H. Rieger, Z. Phys. B 101, 597 (1996).
[12] A. Dutta and D. Sen, Phys. Rev. B 67, 094435 (2003).
[13] A. K. Chandra and S. Dasgupta, arXiv:cond-mat/0612144, Phys. Rev. E 75, 021105 (2007).
[14] M. E. Fisher, Phys. Rev. B 13, 5039 (1976); K. Binder and D. W. Heermann, Monte Carlo Simulations in Statistical Physics (Fourth Edition) (Springer-Verlag, Berlin, Heidelberg, 2002), p. 55; D. P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics (Springer-Verlag, Berlin, Heidelberg, 200?), Chapters 2, 4.
[15] A. Sato and F. Matsubara, Phys. Rev. B 60, 10316 (1999).
[16] A. D. Rutenberg and A. J. Bray, Phys. Rev. B 51, 5499 (1995).
[17] S. Wansleben and D. P. Landau, Phys. Rev. B 43, 6006 (1991).
[18] S. K. Ma, Modern Theory of Critical Phenomena (Benjamin, New York, 1976); M. E. Fisher and Z. Racz, Phys. Rev. B 13, 5039 (1976).
[19] H. K. Janssen, B. Schaub, and B. Schmittmann, Z. Phys. B 73, 539 (1989); B. Zheng, Int. J. Mod. Phys. 12, 1419 (1998); P. Sen and S. Dasgupta, J. Phys. A 35, 2755 (2002).
[20] D. C. Mattis, The Theory of Magnetism, Vol. II (Springer-Verlag, Berlin, Heidelberg, 1985), Sec. 3.6.
[21] R. J. Baxter, Exactly Solved Models in Statistical Mechanics (Academic Press, London, 1982), p. 266.
A Comparison between CAH and EA Theory

Both CAH and EA are concerned with the role of the native language in the early stages of second language learning. Although EA developed out of CAH, the two differ from each other.
Contrastive Analysis is a way of comparing languages in order to determine potential errors, for the ultimate purpose of isolating what needs to be learned and what does not need to be learned in an L2 learning situation. Error Analysis, by contrast, is a type of linguistic analysis that focuses on the errors learners make.
The theoretical foundations for what became known as the Contrastive Analysis Hypothesis were formulated in Lado's Linguistics Across Cultures in 1957. Error analysis in SLA was established in the 1960s by Stephen Pit Corder and colleagues.
Contrastive Analysis was used extensively in the field of Second Language Acquisition (SLA) in the 1960s and early 1970s to predict areas that would be either easy or difficult for learners when compiling pedagogical materials. It is based on a theory of language that claims that language is habit and that language learning involves the establishment of a new set of habits. It is said that one can account for errors by considering differences between the L1 and the L2. That is to say, the greater the differences, the more errors will occur, so what one has to do in learning the L2 is learn the differences. Two positions developed within the CAH framework. They were variously known as the strong form vs. the weak form, the predictive form vs. the explanatory form, and the a priori form vs. the a posteriori form.
Error analysis was an alternative to contrastive analysis. In second language acquisition, Error Analysis studies the types and causes of language errors. Whereas contrastive analysis, an approach influenced by behaviorism, led applied linguists to use the formal distinctions between the learners' first and second languages to predict errors, in Error Analysis errors are regarded as evidence of the state of a learner's knowledge of the L2. There are a number of steps taken in conducting an error analysis: collect data, identify errors, classify errors, quantify errors, analyze sources, and remediate. EA provides a broader range of possible explanations than CA to account for errors. One needs to consider non-errors as well as errors to get the entire picture of a learner's linguistic behavior.
Both theories have their advantages and disadvantages.
Historically, Lado's hypothesis inspired a generation of L2 researchers to conduct linguistic field work, that is, to check hypothetical contrastive analysis statements against the actual speech of language learners. But many errors predicted by Contrastive Analysis were inexplicably not observed in learners' language. Contrastive Analysis could not predict all learning difficulties, but it was certainly useful in the retrospective explanation of errors. It is an oversimplification to think that comparing two languages is a straightforward comparison of structures.
As for EA, it was important in the recognition that learners were more than passive hiccupers of NL forms and functions. Error analysis showed that contrastive analysis was unable to predict a great majority of errors, although its more valuable aspects have been incorporated into the study of language transfer. But it falls short in the analysis of L2 data in that it only sees a partial picture of what a learner produces of the L2.
So it is impossible to appreciate the complexity of the learning situation by studying only one limited part of it.
Simulating Topographic Effects on Spaceborne Radiometric Observations Between L and X Frequency Bands
Nazzareno Pierdicca, Member, IEEE, Luca Pulvirenti, Member, IEEE, and Frank Silvio Marzano, Senior Member, IEEE

Abstract—A numerical simulator of satellite microwave-radiometric observations of orographically complex scenes, at various frequencies and observation angles, has been developed. The Simulator of Topographic Artefacts in MIcrowave RAdiometry (STAMIRA) exploits the information on the relief, extracted from a digital elevation model, and has been applied to a test case concerning a mountainous area in the Alps by assuming a simplified land-cover scenario consisting of bare terrain with two kinds of roughness (smooth and rough soils). The 1–10-GHz range has been considered to determine scattering and emission of soil, and a nonscattering atmosphere has been supposed. The simulations have shown the large impact of the rotation of the polarization plane and of the brightness-temperature enhancement occurring for facets illuminated by radiation from the surrounding elevated terrain with respect to flat surfaces which scatter atmospheric downward radiation only. By considering also the antenna-pattern integration and the dependence of surface emissivity on the local observation angle, we have found that, for our case study, the brightness temperature is larger than that measured observing a flat terrain at horizontal polarization. At vertical polarization, the opposite occurs. These differences are analyzed and quantified.

Index Terms—Microwave radiometry, relief, topography.

I. INTRODUCTION

HILLY and mountainous areas represent important targets to be observed from satellites, particularly for hydrological purposes. Over these areas, the surface effective emission and the atmospheric contribution are considerably influenced by relief effects [1]. Quantifying these effects on the measurements of a spaceborne microwave radiometer is important because most of the algorithms developed to retrieve biogeophysical parameters, such as soil moisture, assume that the Earth surface is horizontal, so that their application to areas with complex topography may lead to significant errors.
Relief effects (i.e., surface roughness at pixel scale) influence the upwelling brightness temperature (T_B) measured by a satellite microwave radiometer in numerous ways [1], [2]. The atmospheric contributions are modified due to their dependence on the surface altitude; the downwelling atmospheric radiation can be shadowed, and radiation emitted by one tilted surface can be reflected by another one. The radiometer local observation angle is a function of the surface slope and aspect, and a depolarization effect occurs, due to a rotation of the linear-polarization plane. Moreover, at microwave frequencies, the highly inhomogeneous features of the mountainous scenario should be compared with the relatively large antenna footprints [3].
In [1], Mätzler and Standley investigated the topography effects for microwave radiometry over land, particularly focusing on the shadowing of sky radiation by an elevated horizon. They stated that a pixel illuminated by the radiation coming from neighbors located at higher altitude increases its T_B with respect to a pixel that scatters atmospheric downward radiation only. This increase was computed by assuming that the elevated surface was a blackbody at constant temperature and for a Lambertian scattering surface.
The influence of the relief was also considered in [2], [4], and [5]. These investigations concerned the L-band Soil Moisture and Ocean Salinity (SMOS) instrument (using aperture synthesis) only. In [2], the level of topography influencing the radiometric signal was assessed, while in [4] it has been shown that, for the region of Catalonia (Spain), the maximum variations of local incidence angle due to topography can reach 55°.

Manuscript received October 1, 2008; revised February 13, 2009 and June 3, 2009. First published October 9, 2009; current version published December 23, 2009. The authors are with the Department of Electronic Engineering, Sapienza University of Rome, 00184 Rome, Italy (e-mail: nazzareno.pierdicca@uniroma1.it; pulvirenti@die.uniroma1.it; marzano@die.uniroma1.it). Color versions of one or more of the figures in this paper are available online at . Digital Object Identifier 10.1109/TGRS.2009.2028881
In a previous paper [3], we quantified the effect of the modification of the local observation angle and of the rotation of the plane of linear polarization, due to the relief, on the emitted component of the upwelling brightness temperature. Moreover, we produced an approximate evaluation of the impact of these effects on soil-moisture retrieval, because the study concerned frequency channels devoted to this application.
In this paper, we aim at producing a complete quantification of the topographic effects on spaceborne microwave-radiometric measurements, thus complementing the previous studies on this topic. For this purpose, we have developed a software package named Simulator of Topographic Artefacts in MIcrowave RAdiometry (STAMIRA) that is able to operate at various microwave frequencies and observation angles. While the atmospheric contribution is accounted for through a radiative-transfer scheme already validated (e.g., [6]) and the thermal-emission contribution is computed as in [3], the original aspect of this paper is mainly represented by the consideration of the downwelling radiation scattered by the surface. Its calculation, in the presence of a complex relief, is a fairly complicated task because the downwelling radiation originates either from the sky or from the elevated landscape, depending on the horizon. In addition, the bistatic-scattering behavior of the surface has to be modeled. This paper is mainly focused on the representation of both coherent and incoherent components of surface scattering and on the approach for determining the horizon and for discriminating radiation of the elevated terrain from sky radiation.
The relief effects are quantified for a simple case study in which the land cover is supposed to be bare soil with two kinds of roughness (smooth and rough surfaces). To perform the simulation, we have exploited a digital elevation model (DEM) of Italy, and we have focused on the Alpine region and its surroundings. The 1–10-GHz frequency range has been considered, and we have assumed two radiometer configurations characterized by a conical scan.
In Section II we give a description of the various procedures used to determine the components of the radiation at satellite level that are included in the simulator. Section III analyzes the results of our study, and Section IV draws the main conclusions.

II. SIMULATOR

The numerical simulation of a radiometric observation of a mountainous area is based on the following formula, which expresses the P-polarized upwelling T_B (T_BP), where P may be H (horizontal) or V (vertical), measured by a satellite radiometer as
$T_{BP} = (T_{em\,P} + T_{sc\,P})\,t + T_{up}$   (1)
where $T_{em\,P}$ is the P-polarized emitted component, t is the atmospheric transmittance, $T_{up}$ is the atmospheric upwelling radiation, and $T_{sc\,P}$ is the P-polarized downwelling radiation scattered by the considered surface. In the following, we illustrate the computation of the various terms of (1) implemented in STAMIRA.

A. Atmospheric Parameters

STAMIRA includes two relationships, which express the trend of the radiative effective emitting temperature $T_{mr}$ (weighted-average profile temperature) of the atmosphere and of the atmospheric optical thickness τ versus the surface altitude z (derived from the DEM). The functional forms of these relationships depend on frequency. To infer them, we have first used a radiative-transfer scheme, assuming a plane-parallel nonscattering atmosphere [6], which has allowed us to compute both $T_{mr}$ and τ from a set of vertical profiles of temperature, pressure, and relative humidity, obtained from
meteorological analysis fields provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). Note that, for the sake of simplicity, the upwelling and downwelling $T_{mr}$ have been supposed equal and the extraterrestrial $T_B$ has been approximated to the cosmic background ($T_{cos}$ = 2.75 K) at all the frequencies. Supposing a single narrow-band receiver, the following relations apply:
$t = e^{-\tau/\mu}$   (2a)
$T_{up} = T_{mr}(1 - t)$   (2b)
$T_{sky} = T_{mr}(1 - t) + T_{cos}\,t$   (2c)
where μ is the cosine of the observation angle and $T_{sky}$ is the sky radiation.
Considering that, in a mountainous zone, the atmospheric contributions depend on the surface altitude, we have computed the $T_{mr}$ and τ values for various surface heights and at different times in order to derive their mean trend versus z. The spatial sampling extends over an Alpine zone of 128×128 km (the same considered in [3], whose limits are 45.9°–47.0° N, 7.7°–9.4° E). As for the temporal sampling, the ECMWF analysis fields have been collected throughout the first ten days of each month of year 2000. Then, we have fitted the trends with a polynomial relationship to determine the expected atmospheric contribution of each DEM element as a function of its height. For the X-band, we have used a second-degree polynomial for $T_{mr}$ and a first-degree polynomial for τ, while at the C- and L-bands, the atmosphere is almost transparent in nonscattering conditions [7], with very small values of τ almost constant in the range of surface heights. Note that, although in this study we apply our simulator to the 1–10-GHz frequency range, we foresee extending the use of STAMIRA to higher frequencies too, where the atmosphere is far from being transparent, even in the absence of scattering phenomena.

B. Emitted Component

STAMIRA includes a module for the computation of $T_{em\,P}$ that is based on the study carried out in [3]. For each DEM element, this module derives the aspect and the slope angles and determines the local surface normal. Then, it calculates the angle between the surface normal and the radiometer-observation direction, i.e., the local observation angle $\theta_l$, and the angle between the polarization-basis vectors in the global coordinate system (related to the direction of observation) and in the local coordinate system (related to the surface element), that is, the rotation angle of the polarization plane ψ. Note that the radiometer-observation direction is not a constant term in the simulator (see [3]), since a conical scanning is conceived with constant zenith angle but varying azimuth.
To compute the emissivity of a facet of the DEM as a function of $\theta_l$, a model should be used. A semiempirical relationship that is widely adopted to express the P-polarized emissivity of bare soil ($e_P$) has the following form (e.g., [7]–[10]):
$e_P(\theta_l) = 1 - \left[\Gamma^{sp}_P(\theta_l)(1 - q_s) + q_s\,\Gamma^{sp}_Q(\theta_l)\right]\exp(-h_s)$.   (3)
In (3), $q_s$ is the polarization coupling factor, $h_s$ is a parameter related to the surface roughness (at wavelength scale), while $\Gamma^{sp}_P$ and $\Gamma^{sp}_Q$ are the P- and Q-polarized (either H or V) specular Fresnel reflectivities (one minus the emissivities). Several estimates for $q_s$ and $h_s$ were provided in the literature for various frequencies, which are generally derived from experimental data.
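To show how Eqs. (2a)–(2c) and (3) fit together numerically, here is a small Python sketch. It is not part of STAMIRA, and the values of q_s, h_s and of the specular reflectivities are placeholders chosen for illustration, not outputs of the Wang or Dobson models:

```python
# Sketch: atmospheric terms (Eqs. 2a-c) and semi-empirical bare-soil emissivity (Eq. 3).
# All numerical inputs are placeholders; in STAMIRA q_s and h_s come from the Wang
# model and the specular reflectivities from Fresnel equations with Dobson permittivity.
import math

T_COS = 2.75                                   # cosmic background [K]

def atmosphere(tau, mu, T_mr):
    t = math.exp(-tau / mu)                    # transmittance, Eq. (2a)
    T_up = T_mr * (1.0 - t)                    # upwelling radiation, Eq. (2b)
    T_sky = T_mr * (1.0 - t) + T_COS * t       # sky radiation, Eq. (2c)
    return t, T_up, T_sky

def emissivity_p(gamma_sp_p, gamma_sp_q, q_s, h_s):
    # Eq. (3): e_P = 1 - [(1 - q_s)*Gamma_P + q_s*Gamma_Q] * exp(-h_s)
    return 1.0 - (gamma_sp_p * (1.0 - q_s) + q_s * gamma_sp_q) * math.exp(-h_s)

t, T_up, T_sky = atmosphere(tau=0.02, mu=math.cos(math.radians(50.0)), T_mr=275.0)
e_h = emissivity_p(gamma_sp_p=0.35, gamma_sp_q=0.15, q_s=0.1, h_s=0.3)   # placeholders
print(t, T_up, T_sky, e_h)
```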
After the calculation of e_P as a function of θ_l, this parameter has to be referenced to the global coordinate system through [3]

e_P(θ_l, ψ) = e_P(θ_l) cos²ψ + e_Q(θ_l) sin²ψ.    (4)

Denoting by T_s the soil temperature of a certain DEM element at height z, the emitted component of the P-polarized T_B at satellite level is

T_em_P = e_P(θ_l, ψ) T_s(z).    (5)

While in [3] we made the simplifying hypothesis of a constant soil temperature T_s, STAMIRA assumes that T_s decreases with z according to a standard temperature gradient of 6.5 K/km. T_s has been supposed constant with depth in the soil, over the range of penetration of the considered frequencies, and the temperature at sea level (z = 0) has been supposed to be 296 K.

C. Radiation Scattered by the Surface

While the computation of the atmospheric parameters and of the emitted component relies on previous studies [3], [6], the algorithm to calculate T_sc_P in the presence of a complex topography is the most distinctive aspect of this work.

The problem requires the knowledge of the soil bistatic-scattering coefficient. For the sake of computational efficiency, STAMIRA describes the surface bistatic-scattering behavior at P-polarization through a combination of a specular coherent component and a term depending on the cosine of the scattering zenith angle θ_s, representing the diffuse incoherent component. The cosinusoidal dependence on θ_s makes the incoherently scattered radiance independent of the scattering direction, in relatively good agreement with the trend of the bistatic-scattering coefficient versus θ_s (for a given incidence angle) that we assessed in [12] by exploiting a well-established physical model, the Advanced Integral Equation Model (AIEM) [13]. With this simplifying assumption, and considering the local coordinate system, the sum σ0_P of the copolarized (σ0_PP) and cross-polarized (σ0_PQ) components of the bistatic-scattering coefficient is given by

σ0_PP + σ0_PQ = σ0_P = (1 − α) σ00_P(θ_l) cos θ_s + α 4π Γsp_P (cos θ_l / sin θ_sp) δ(θ_s − θ_sp) δ(φ_s − φ_sp)    (6)

where θ_l, in this case, is the incidence angle in the local frame, δ is the Dirac delta function, θ_sp and φ_sp denote the zenith and azimuth angles corresponding to the specular direction, respectively, α, spanning the interval [0, 1], represents the specularity factor, and the quantity σ00_P describes the dependence of the incoherent component on θ_l. Note that the dependence of σ0_PP and σ0_PQ on (θ_l, θ_s; φ_l, φ_s) has been omitted from the notation for conciseness.

While in (3) θ_l is the direction from which the emissivity is observed, σ0_P relates the magnitude of the power scattered in the direction (θ_s, φ_s) to the power incident from (θ_l, φ_l) (see [14]). This apparently different meaning of θ_l between (3) and (6) is due to the application of Kirchhoff's radiation law [14], which states that e_P(θ_l) is equal to one minus the reflectivity Γ_P(θ_l). To determine Γ_P(θ_l), σ0_P must be integrated over dΩ_s = sin θ_s dθ_s dφ_s and divided by 4π cos θ_l. Performing the same kind of integration for the product T_sky σ0_P, it is also possible to calculate T_sc_P(θ_l), as will be shown later on [see (9)].

The result of the integration of σ0_P over the upper hemisphere is

Γ_P(θ_l) = 1 − e_P(θ_l) = [1/(4π cos θ_l)] ∫_2π σ0_P dΩ_s = (1 − α) σ00_P(θ_l)/(4 cos θ_l) + α Γsp_P(θ_l) = (1 − α) Γin_P + α Γsp_P(θ_l).    (7)

In (7), we have put Γin_P = σ00_P(θ_l)/(4 cos θ_l), where the label "in" indicates that this quantity originates from the incoherent component of the bistatic-scattering coefficient.
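The second equality in (7) follows from the hemispheric integration of the cosinusoidal incoherent term. The short numerical check below, with an arbitrary illustrative value of σ00_P, reproduces the closed-form result σ00_P/(4 cos θ_l):

import numpy as np

sigma0 = 0.2                     # illustrative sigma00_P(theta_l)
theta_l = np.deg2rad(55.0)

# Midpoint quadrature over the scattering zenith angle; azimuth gives a factor 2*pi,
# and dOmega_s = sin(theta_s) dtheta_s dphi_s.
n = 20000
theta_s = (np.arange(n) + 0.5) * (np.pi / 2) / n
d_theta = (np.pi / 2) / n
integral = 2 * np.pi * np.sum(sigma0 * np.cos(theta_s) * np.sin(theta_s)) * d_theta

gamma_in_numeric = integral / (4 * np.pi * np.cos(theta_l))
gamma_in_closed = sigma0 / (4 * np.cos(theta_l))
print(gamma_in_numeric, gamma_in_closed)   # the two values agree closely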
To make Γ_P(θ_l) consistent with the expression of e_P(θ_l) given by (3), we have found the following:

α = (1 − q_s) exp(−h_s)    (8a)
Γin_P = q_s Γsp_Q exp(−h_s)/(1 − α).    (8b)

Note that, in this way, the amplitude of the coherent component contains a factor exp(−h_s), thus being in agreement with well-known relationships (e.g., the scalar approximation of the Physical Optics formulation, see [15]) accounting for a factor equal to exp(−4 k0² cos²θ_l s²), which attenuates the specular coherent component for increasing roughness (k0 is the wavenumber). Moreover, to verify reciprocity, Γin_P should not depend on θ_l, so that an average over the range 0°–90° has been carried out.

Once both the coherent and incoherent components of the surface reflectivity are known, we can express the downwelling brightness temperature scattered by a facet of the DEM, T_sc_P [see (1)], which (in the local frame) is given by [14]

T_sc_P(θ_l) = [1/(4π cos θ_l)] ∫_2π σ0_P T_sky(θ_s) dΩ_s.    (9)

Note that, as mentioned in Section II-A, we have supposed a plane-parallel atmosphere, so that T_sky does not depend on the azimuth angle.

Fig. 1. Various contributions to T_sc_P. The left panel shows the horizon of a surface element and the corresponding solid angles under which the element sees the elevated landscape (ΔΩ_ground) and the sky (ΔΩ_sky). The right panel shows the components of the brightness temperature incident upon the surface, coming from below (T_B_ground) and above (T_sky) the horizon.

In the presence of relief, each surface element (an individual cell of the DEM) may diffuse both the sky radiation and the radiation coming from the elevated landscape. Moreover, when substituting the expression of σ0_P into (9), it has to be considered that cos θ_s is given by n · m, i.e., by the scalar product between the unit vector n denoting the surface normal and the unit vector m representing the direction from which the radiation impinges on the surface.
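Before turning to the relief case, the sketch below evaluates the incoherent part of (9) for a horizontal facet with an unobstructed horizon, for which n · m = cos θ_s and the whole upper hemisphere is sky. The values of τ, T_mr, α, and Γin_P are placeholders rather than outputs of the atmospheric and roughness modules:

import numpy as np

tau, t_mr, t_cos = 0.02, 275.0, 2.75   # optical thickness, radiating temperature, cosmic background [K]
alpha, gamma_in = 0.55, 0.05           # illustrative specularity factor and incoherent reflectivity

# Midpoint grid over the sky zenith angle; azimuth contributes a factor 2*pi.
n = 20000
theta_s = (np.arange(n) + 0.5) * (np.pi / 2) / n
d_theta = (np.pi / 2) / n
t_sky = t_mr * (1.0 - np.exp(-tau / np.cos(theta_s))) + t_cos * np.exp(-tau / np.cos(theta_s))  # (2c)

# Integral of T_sky (n . m) dOmega_s over the sky hemisphere, then the incoherent term of (9)-(10).
i_sky = 2 * np.pi * np.sum(t_sky * np.cos(theta_s) * np.sin(theta_s)) * d_theta
t_sc_incoherent = (1.0 - alpha) * gamma_in / np.pi * i_sky
print(t_sc_incoherent)   # K; the coherent part would add alpha * Gamma_sp_P * T_sky(theta_sp)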
By indicating with ΔΩ_ground the solid angle under which each individual facet observes the surrounding terrain and with ΔΩ_sky the corresponding quantity for the sky radiation, and accounting for (6)–(8), we can write

T_sc_P = α Γsp_P T_sp + [(1 − α) Γin_P / π] × [ ∫_ΔΩ_ground T_B_ground(θ_s, φ_s, z) (n · m) dΩ_s + ∫_ΔΩ_sky T_sky(θ_s, z) (n · m) dΩ_s ].    (10)

In (10), T_sp is the brightness temperature coming from the specular direction (either from the sky or from an elevated surface, depending on the horizon) that is coherently reflected, while T_B_ground is the downwelling T_B coming from below the horizon. Fig. 1 explains the geometry.

The term within the square brackets in (10) is proportional to the irradiance incident on the surface. To discriminate between the irradiance due to the elevated terrain [first integral of (10)] and the sky radiation (second integral), as well as to determine the horizon of every DEM facet, we have implemented inside STAMIRA a module based on a sort of ray-tracing algorithm. We have considered a moving window centered on each facet, so that the rays originate from the center and their length depends on the dimension of the window, as shown in the upper panel of Fig. 2, in which black lines mark grid boxes, while blue and red lines mark radiation paths. In particular, red lines cross pixels that are visible from the central one, while a blue line crosses hidden pixels. To make the central element of the window clearly identifiable, the rays crossing it are not drawn in the figure. The central and bottom panels of Fig. 2 show the way STAMIRA finds the horizon of each path and discriminates visible (red) facets from hidden (blue) ones.
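Once the specular term and the two bracketed integrals are available, (10) reduces to a simple composition. A minimal sketch with purely illustrative values, in which the irradiance-like integrals would come from the ray-tracing procedure described next:

import math

def scattered_component(alpha, gamma_sp, t_sp, gamma_in, i_ground, i_sky):
    # Composition of (10): coherent specular term plus the incoherent term driven
    # by the irradiance from the elevated terrain (i_ground) and the sky (i_sky).
    return alpha * gamma_sp * t_sp + (1.0 - alpha) * gamma_in / math.pi * (i_ground + i_sky)

print(scattered_component(alpha=0.55, gamma_sp=0.30, t_sp=30.0,
                          gamma_in=0.05, i_ground=150.0, i_sky=80.0))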
Fig. 2. Ray-tracing algorithm applied to a moving window surrounding each DEM cell. The upper panel shows the paths between the center and the edge of the window; black lines mark grid boxes, while blue and red lines mark radiation paths crossing either hidden (blue lines) or visible (red lines) facets. The central and lower panels illustrate the evaluation of the horizon for each path and the discrimination of visible (red) DEM elements from hidden (blue) ones. Note that z is the vertical axis, while the axes x and y define the horizontal plane. See text for details.

In these panels, r identifies the r-th ray (r = 1 : N_r, where N_r denotes the number of paths between the center and the edge of the window) and P_k indicates the k-th facet crossed by the ray (k = 1 : N_rk, where N_rk denotes the number of pixels crossed by the r-th ray). In this way, the dependence of the unit vector m on (θ_s, φ_s), omitted from the notation of (10) for conciseness, is replaced by a dependence on (r, k), and the symbol m_rk is used in the figure to make this dependence explicit (see the bottom panel).

First, STAMIRA scans the central coordinates of the DEM cells along the ray to search for the zenith angle of the horizon θ_r_max and identifies the corresponding pixel (the peripheral facet P5 in the example of the central panel of Fig. 2), whose distance from the central one (C) is indicated as d_max. Facets whose distance from C is larger than d_max (P6 and P7, in the example) are not visible. Then, STAMIRA performs a scan of the other pixels along the considered ray (distances less than or equal to d_max): let us indicate with θ_k (k = 1 : N_rk) the angle between the vertical axis (z) and the line connecting C with P_k (CP_k), with θ_sk (k = 1 : N_rk) the angle between the normal to C (n) and m_rk (see the bottom panel, where the central one is zoomed considering only C, P1, and P2), and with θ_qk (k = 1 : N_rk) the angle between the normal to P_k (n_k) and CP_k (see the bottom panel). The k-th peripheral facet is visible if all the following three conditions are verified: 1) θ_k < θ_h (h = 1 : k−1); 2) θ_sk < 90°; and 3) θ_qk < 90°.

It can be observed that, in the example of Fig. 2, P1 is not visible (blue) because θ_q1 > 90°. The same applies to P3, which is oriented in the same way as P1 with respect to C (θ_q3 is not shown for conciseness), while P4 is hidden because θ_4 is larger than θ_2 and θ_3. It is worth noting that θ_k is given by the sum of θ_sk plus the slope angle of C. The computations of θ_sk and θ_qk are described in Appendix A.

Once STAMIRA has established that a peripheral pixel is visible from the central one, it calculates the T_B emitted by the former toward the latter (i.e., T_B_ground, which is a function of θ_qk) and the solid angle under which the central facet observes the peripheral one, ΔΩ_ground(r, k). All the contributions originating from the elevated terrain are finally summed, so that the first integral in (10) has been approximated as

I1 = Σ_{r=1..N_r} Σ_{k=1..N_rk} T_B_ground(r, k, z) [n · m_rk] ΔΩ_ground(r, k)    (11)

where the polarizations are not indicated for conciseness and T_B_ground has been evaluated as in (5), so that it depends on the surface altitude z. To sum all the contributions from the elevated landscape, they have been referenced to the local coordinate system of the central pixel. Successively, T_sc_P has been referenced to the global frame [see (4)].

To calculate the irradiance due to sky radiation, it must be considered that, for each ray, the sky radiation is limited to the incidence angles between 0° and θ_r_max (see Fig. 2, central panel), so that the second integral in (10) can be approximated as

I2 = Σ_{r=1..N_r} Δφ_r ∫_0^θ_r_max T_sky(θ_s, z) [n · m_r(θ_s)] sin θ_s dθ_s.    (12)

In (12), Δφ_r represents the increment of the azimuth angle between two adjacent rays, and T_sky has been computed through (2a) and (2c). With respect to (11), one integral has not been approximated through a sum because it can be computed in closed form, so that, instead of m_rk, we use the notation m_r(θ_s). The integral has been tabulated for several values of θ_r_max and z in order to speed up the computation. Note that θ_r_max in the absolute frame is upper limited by the maximum zenith angle of π/2 in the local frame.
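As an illustration of the per-ray bookkeeping, the sketch below applies a single-pass variant of the three visibility conditions along one ray and accumulates the corresponding contribution to (11). The facet angles, brightness temperatures, and solid angles are placeholders rather than values derived from an actual DEM:

import numpy as np

theta_k  = np.deg2rad([80.0, 75.0, 85.0, 70.0, 88.0])    # zenith angles of the lines C -> P_k
theta_sk = np.deg2rad([60.0, 55.0, 65.0, 50.0, 70.0])    # incidence angles at C (normal n vs m_rk)
theta_qk = np.deg2rad([95.0, 40.0, 45.0, 35.0, 50.0])    # incidence angles at P_k (normal n_k vs P_k -> C)
t_ground = np.array([250.0, 252.0, 249.0, 255.0, 251.0]) # emitted T_B of each P_k [K]
d_omega  = np.full(5, 1.0e-3)                            # solid angles under which C sees each P_k [sr]

i1 = 0.0
running_min = np.inf
for k in range(len(theta_k)):
    visible = (theta_k[k] < running_min           # 1) rises above all closer facets along the ray
               and theta_sk[k] < np.pi / 2        # 2) lies above the local horizontal of C
               and theta_qk[k] < np.pi / 2)       # 3) P_k actually faces C
    running_min = min(running_min, theta_k[k])
    if visible:
        i1 += t_ground[k] * np.cos(theta_sk[k]) * d_omega[k]

# Horizon of this ray: zenith angle of the most obstructing facet, capped at pi/2.
theta_r_max = min(theta_k.min(), np.pi / 2)
print(i1, np.rad2deg(theta_r_max))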
Before ending this section, the computation of the specular component T_sp in (10) has to be illustrated. According to the relationships written in Appendix B, STAMIRA adopts a "reverse ray tracing" and determines the DEM element lying in the specular direction s_sp with respect to the observation direction, represented by the versor s_i, as shown in Fig. 3. The versor s_sp can also be represented by the zenith angle θ_sp and the azimuth angle φ_sp in the absolute frame (see Appendix B). The azimuth φ_sp defines, in the window centered on the element, the ray that has to be examined to determine whether T_sp comes from the sky or from the elevated landscape (the r-th ray in Fig. 3). For this ray, if θ_sp < θ_r_max (upper panel of Fig. 3), sky radiation is reflected; otherwise, the specular reflection concerns the brightness temperature emitted from the ground (lower panel of Fig. 3). In the former case, T_sp has been computed through (2a) and (2c), while in the latter case it has been computed as in (5) and depends on polarization.

Fig. 3. Specular direction denoted by the versor s_sp, while s_i represents the looking direction. The upper and lower panels show the cases in which the radiation coming from the specular direction originates from the sky and from the ground, respectively.

The computation of the number N of DEM elements comprised in every radiometric pixel, i.e., in the instantaneous field of view (IFOV) of the radiometer antenna, and the evaluation of the antenna temperature have finally been accomplished by making the same simplifying hypotheses as in [3], except for the estimation of the pixels affected by shadowing in the antenna footprint, which has been performed as in [4]. We have supposed, for simplicity, that the antenna is characterized by a major-lobe efficiency equal to one and by a constant directivity D within the major lobe. Indicating with θ_ni the slope angle of the i-th element of the DEM included in the considered radiometric pixel, with θ_li its local observation angle, and with T_BP_i its brightness temperature, the antenna temperature T_P (again, P represents polarization) has been approximated as [3]

T_P = [Σ_{i=1..N} T_BP_i D Ω_i] / [Σ_{i=1..N} D Ω_i] = [Σ_{i=1..N} T_BP_i D A cos θ_li/(H² cos θ_ni)] / [Σ_{i=1..N} D A cos θ_li/(H² cos θ_ni)] = [Σ_{i=1..N} T_BP_i cos θ_li/cos θ_ni] / [Σ_{i=1..N} cos θ_li/cos θ_ni].    (13)

In (13), A is the area of a DEM element referenced to a projection on the horizontal plane, so that its actual area is given by A/cos θ_ni, while Ω_i is the solid angle under which the i-th facet is observed by the radiometer, equal to A cos θ_li/(H² cos θ_ni), H being the range from the DEM cell to the satellite, which is constant assuming a conical scanning, a narrow beam, and neglecting the cell height with respect to the satellite height (see [3]).
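A compact numerical sketch of the weighted average in (13), with illustrative facet values; the common factors D, A, and H² cancel out, leaving only cos θ_li/cos θ_ni as weights:

import numpy as np

def antenna_temperature(t_bp, theta_l, theta_n):
    # Weighted average of facet brightness temperatures following (13).
    w = np.cos(theta_l) / np.cos(theta_n)
    return np.sum(t_bp * w) / np.sum(w)

t_bp    = np.array([248.0, 252.0, 250.0, 255.0])   # facet brightness temperatures [K]
theta_l = np.deg2rad([50.0, 55.0, 60.0, 52.0])     # local observation angles
theta_n = np.deg2rad([10.0, 25.0, 35.0, 5.0])      # slope angles
print(antenna_temperature(t_bp, theta_l, theta_n))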
III. NUMERICAL RESULTS

To perform the quantification of the relief effects, we have focused our analysis on a mountainous area in the Alps (Northern Italy), and we have derived the topography from a DEM of Italy having a spatial resolution of 250 × 250 m. We have simulated the radiometric image for three frequency bands, 1.4, 6.9, and 10.6 GHz, by assuming a conically scanning sensor. For the latter two bands, we have made reference to the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) [16], supposing a sensor observing the Earth at 55° from an altitude of 705 km. In this case, the following spatial resolutions have been supposed: 75 × 43 km for the C-band and 51 × 29 km for the X-band [16]. For the L-band (1.4 GHz), we have considered an instrument characterized by an observation angle of 40° and orbiting at 670 km of altitude, as foreseen for the radiometer aboard the future Soil Moisture Active Passive (SMAP) mission [17]. A spatial resolution of 40 × 40 km has been assumed.

For the AMSR-E-like scan, we have considered a portion of 512 × 512 elements of the DEM (i.e., 128 × 128 km), as in [3]. This portion, comprising a wide range of topography (the height standard deviation is ∼800 m and the slope standard deviation is ∼11°), includes 63 radiometric pixels at C-band and 99 at X-band, if a radiometric pixel spacing of 10 × 10 km [16] is assumed. Regarding the L-band simulation, we have supposed a radiometric pixel spacing on the order of the dimension of the pixel itself, so that we have enlarged the area (1024 × 512 elements, i.e., 256 × 128 km), thus including 21 radiometric pixels. A further widening of the area would have implied a very high computational time.

The results of the AMSR-E-like and SMAP-like simulations will be discussed by making reference to two quantities. The first one is T_P − T_P_flat, which is the difference between the antenna temperature calculated through (13) and that computed for a flat terrain (having the same roughness, moisture, and composition) located at an altitude equal to the average of the heights of the DEM facets within the antenna IFOV. The term T_P − T_P_flat permits the evaluation of the topographic effects at subpixel scale on spaceborne radiometric observations, which is the objective of this work. The second quantity is the corresponding one for the emitted component only, i.e., T_em_P − T_em_P_flat. Plots of T_P − T_P_flat versus T_em_P − T_em_P_flat will be shown in Sections III-A and III-B to single out both the tilting effects on the emitted component and the effect of the shadowing of sky radiation on the scattered component.

A. Results for C- and X-bands

Fig. 4 shows the results of our simulation exercise obtained for the 6.9- and 10.6-GHz frequency bands and for a rough soil (s = 2.45 cm). It can be noted that all the points are located above the principal diagonal (dashed line), i.e., T_P − T_P_flat > T_em_P − T_em_P_flat. Moreover, at horizontal polarization (H-pol), both T_P − T_P_flat and T_em_P − T_em_P_flat are positive quantities, while the opposite occurs at V-pol. The results are summarized in Table I, which reports the average values of T_P − T_P_flat and T_em_P − T_em_P_flat computed over the 63 points for C-band and the 99 points for X-band considered in Fig. 4; the table also includes the case of a smooth soil (s = 0.73 cm).

Fig. 4. T_P − T_P_flat versus T_em_P − T_em_P_flat for a rough soil (s = 2.45 cm). (Left panels) H polarization. (Right panels) V polarization. (Upper panels) C-band. (Lower panels) X-band.

TABLE I. Average values [K] of T_P − T_P_flat and T_em_P − T_em_P_flat.
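For completeness, a minimal sketch of how the first diagnostic quantity is formed: the simulated antenna temperature is compared with that of a flat terrain placed at the mean facet height of the IFOV. The function simulate_t_p below is a purely hypothetical stand-in for the full simulator chain, not the STAMIRA physics:

import numpy as np

def simulate_t_p(heights, slopes):
    # Hypothetical placeholder for the full chain producing the antenna temperature;
    # the real computation follows (1)-(13).
    return 250.0 + 0.002 * np.mean(heights) - 0.1 * np.mean(slopes)

heights = np.array([1200.0, 1850.0, 950.0, 2100.0])   # facet heights in the IFOV [m]
slopes  = np.array([12.0, 25.0, 8.0, 30.0])           # facet slope angles [deg]

t_p      = simulate_t_p(heights, slopes)
t_p_flat = simulate_t_p(np.full_like(heights, heights.mean()), np.zeros_like(slopes))
print(t_p - t_p_flat)   # T_P - T_P_flat [K]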
The results for the emitted component were analyzed in detail in [3]. Although in [3] a different emissivity model [18] was used and the dependence of the surface temperature on height was neglected, the positive impact of variable terrain on the emitted component at H-pol (i.e., T_em_H − T_em_H_flat > 0) and the negative one at V-pol are supported in this study too, so that the analyses we made to explain this behavior still apply. To briefly summarize those findings, the values of T_em_P − T_em_P_flat are mainly determined by: 1) the dependence of the emissivity on the local observation angle θ_l; 2) the rotation of the polarization plane; and 3) the antenna-pattern integration. As for the dependence on θ_l, the emissivity decreases as θ_l increases, except for a smooth soil at V-pol; in this case, the decrease occurs for θ_l larger than the Brewster angle, as shown in Fig. 5. The coupling of the polarizations causes an increase of the emissivity at H-pol and a decrease at V-pol. The beam-weighted integration due to the antenna tends to raise the antenna temperature at H-pol because, according to the Wang model, the facets with the highest emissivity are those whose θ_li is small (see Fig. 5). Since for these facets (cos θ_li/cos θ_ni) is large [see (13)], they appear to the radiometer under a larger solid angle. As for V-pol, the facets producing the highest emission are observed at the Brewster angle, where (cos θ_li/cos θ_ni) is relatively small.

The increase of T_P − T_P_flat with respect to T_em_P − T_em_P_flat indicates that T_sc_P (the scattered component of the T_B at satellite level) in the presence of the relief is larger than