
Face Recognition Thesis Literature Translation (Chinese-English)

Appendix (Original Text and Translation)

The original text for translation is taken from:

Thomas David Heseltine BSc. Hons. The University of York

Department of Computer Science

For the Qualification of PhD, September 2005

《Face Recognition: Two-Dimensional and Three-Dimensional Techniques》

4 Two-dimensional Face Recognition

4.1 Feature Localization

Before discussing the methods of comparing two facial images we now take a brief look at some of the preliminary processes of facial feature alignment. This process typically consists of two stages: face detection and eye localisation. Depending on the application, if the position of the face within the image is known beforehand (for a cooperative subject in a door access system, for example) then the face detection stage can often be skipped, as the region of interest is already known. Therefore, we discuss eye localisation here, with a brief discussion of face detection in the literature review (section 3.1.1).

The eye localisation method is used to align the 2D face images of the various test sets used throughout this section. However, to ensure that all results presented are representative of the face recognition accuracy and not a product of the performance of the eye localisation routine, all image alignments are manually checked and any errors corrected, prior to testing and evaluation.

We detect the position of the eyes within an image using a simple template based method. A training set of manually pre-aligned images of faces is taken, and each image cropped to an area around both eyes. The average image is calculated and used as a template.

Figure 4-1 - The average eyes. Used as a template for eye detection.
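The template construction described above amounts to a simple pixel-wise mean over the cropped training examples. The following sketch illustrates one way this could be computed (the function name, the NumPy usage and the eye_box crop bounds are illustrative assumptions, not part of the original work):

    import numpy as np

    def build_eye_template(aligned_faces, eye_box):
        # Average a fixed crop around both eyes over a set of manually
        # pre-aligned greyscale face images to form the "average eyes".
        top, bottom, left, right = eye_box
        crops = [face[top:bottom, left:right].astype(np.float64)
                 for face in aligned_faces]
        return np.mean(crops, axis=0)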

Both eyes are included in a single template, rather than individually searching for each eye in turn, as the characteristic symmetry of the eyes either side of the nose provides a useful feature that helps distinguish between the eyes and other false positives that may be picked up in the background. However, this method is highly susceptible to scale (i.e. subject distance from the camera) and also introduces the assumption that eyes in the image appear near horizontal. Some preliminary experimentation also reveals that it is advantageous to include the area of skin just beneath the eyes. The reason is that in some cases the eyebrows can closely match the template, particularly if there are shadows in the eye-sockets, but the area of skin below the eyes helps to distinguish the eyes from eyebrows (the area just below the eyebrows contains eyes, whereas the area below the eyes contains only plain skin).

A window is passed over the test images and the absolute difference taken to that of the average eye image shown above. The area of the image with the lowest difference is taken as the region of interest containing the eyes. Applying the same procedure using a smaller template of the individual left and right eyes then refines each eye position.
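As a rough sketch of this search, the window comparison can be expressed as an exhaustive scan minimising the sum of absolute differences against the template (an illustrative implementation only; the names and the brute-force loop are assumptions rather than the author's code):

    import numpy as np

    def locate_eyes(image, template):
        # Slide the average-eye template over the image and return the
        # top-left corner of the window with the lowest absolute difference.
        th, tw = template.shape
        best_pos, best_err = None, np.inf
        for y in range(image.shape[0] - th + 1):
            for x in range(image.shape[1] - tw + 1):
                window = image[y:y + th, x:x + tw].astype(np.float64)
                err = np.abs(window - template).sum()
                if err < best_err:
                    best_err, best_pos = err, (y, x)
        return best_pos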

This basic template-based method of eye localisation, although providing fairly precise localisations, often fails to locate the eyes completely. However, we are able to improve performance by including a weighting scheme.

Eye localisation is performed on the set of training images, which is then separated into two sets: those in which eye detection was successful; and those in which eye detection failed. Taking the set of successful localisations we compute the average distance from the eye template (Figure 4-2 top). Note that the image is quite dark, indicating that the detected eyes correlate closely to the eye template, as we would expect. However, bright points do occur near the whites of the eyes, suggesting that this area is often inconsistent, varying greatly from the average eye template.

Figure 4-2 – Distance to the eye template for successful detections (top) indicating variance due to noise, and failed detections (bottom) showing credible variance due to miss-detected features.

In the lower image (Figure 4-2 bottom), we have taken the set of failed localisations (images of the forehead, nose, cheeks, background etc. falsely detected by the localisation routine) and once again computed the average distance from the eye template. The bright pupils surrounded by darker areas indicate that a failed match is often due to the high correlation of the nose and cheekbone regions overwhelming the poorly correlated pupils. Wanting to emphasise the difference of the pupil regions for these failed matches and minimise the variance of the whites of the eyes for successful matches, we divide the lower image values by the upper image to produce a weights vector as shown in Figure 4-3. When applied to the difference image before summing a total error, this weighting scheme provides a much improved detection rate.

Figure 4-3 - Eye template weights used to give higher priority to those pixels that best represent the eyes.
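In outline, the weights are formed by dividing the mean distance image of the failed detections by that of the successful detections, and each pixel of the difference image is then scaled by its weight before the total error is summed. A minimal sketch, assuming the two sets of per-pixel distance images are available as arrays (the names and the small epsilon guard are illustrative assumptions):

    import numpy as np

    def build_eye_weights(success_diffs, failure_diffs, eps=1e-6):
        # Mean per-pixel distance from the template for successful (Figure 4-2,
        # top) and failed (Figure 4-2, bottom) detections; their ratio gives
        # the weights of Figure 4-3.
        mean_success = np.mean(success_diffs, axis=0)
        mean_failure = np.mean(failure_diffs, axis=0)
        return mean_failure / (mean_success + eps)

    def weighted_template_error(window, template, weights):
        # Weighted absolute difference, summed to a single detection error.
        return (weights * np.abs(window - template)).sum()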

4.2 The Direct Correlation Approach

We begin our investigation into face recognition with perhaps the simplest approach, known as the direct correlation method (also referred to as template matching by Brunelli and Poggio [29]), involving the direct comparison of pixel intensity values taken from facial images. We use the term ‘Direct Correlation’ to encompass all techniques in which face images are compared directly, without any form of image space analysis, weighting schemes or feature extraction, regardless of the distance metric used. Therefore, we do not infer that Pearson’s correlation is applied as the similarity function (although such an approach would obviously come under our definition of direct correlation). We typically use the Euclidean distance as our metric in these investigations (inversely related to Pearson’s correlation, it can be considered a scale- and translation-sensitive form of image correlation), as this maintains the contrast made between image space and subspace approaches in later sections.

Firstly, all facial images must be aligned such that the eye centres are located at two specified pixel coordinates and the image cropped to remove any background information. These images are stored as greyscale bitmaps of 65 by 82 pixels and prior to recognition converted into a vector of 5330 elements (each element containing the corresponding pixel intensity value). Each corresponding vector can be thought of as describing a point within a 5330 dimensional image space. This simple principle can easily be extended to much larger images: a 256 by 256 pixel image occupies a single point in 65,536-dimensional image space and again, similar images occupy close points within that space. Likewise, similar faces are located close together within the image space, while dissimilar faces are spaced far apart. Calculating the Euclidean distance d, between two facial image vectors (often referred to as the query image q, and gallery image g), we get an indication of similarity. A threshold is then applied to make the final verification decision.

d = \|q - g\|, \qquad d \le \text{threshold} \Rightarrow \text{accept}, \quad d > \text{threshold} \Rightarrow \text{reject} \qquad \text{(Equ. 4-1)}
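A minimal sketch of this verification step, assuming the two images are already aligned and cropped as described (the function and variable names are illustrative, not taken from the thesis):

    import numpy as np

    def direct_correlation_verify(query_img, gallery_img, threshold):
        # Flatten the 65 x 82 greyscale images into 5330-element vectors and
        # apply the decision rule of Equ. 4-1.
        q = query_img.astype(np.float64).ravel()
        g = gallery_img.astype(np.float64).ravel()
        d = np.linalg.norm(q - g)          # Euclidean distance
        return 'accept' if d <= threshold else 'reject'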


4.2.1 Verification Tests

The primary concern in any face recognition system is its ability to correctly verify a claimed identity or determine a person's most likely identity from a set of potential matches in a database. In order to assess a given system’s ability to perform these tasks, a variety of evaluation methodologies have arisen. Some of these analysis methods simulate a specific mode of operation (i.e. secure site access or surveillance), while others provide a more mathematical description of data distribution in some classification space. In addition, the results generated from each analysis method may be presented in a variety of formats. Throughout the experimentations in this thesis, we primarily use the verification test as our method of analysis and comparison, although we also use Fisher’s Linear Discriminant to analyse individual subspace components in section 7 and the identification test for the final evaluations described in section 8.

The verification test measures a system’s ability to correctly accept or reject the proposed identity of an individual. At a functional level, this reduces to two images being presented for comparison, for which the system must return either an acceptance (the two images are of the same person) or rejection (the two images are of different people). The test is designed to simulate the application area of secure site access. In this scenario, a subject will present some form of identification at a point of entry, perhaps a swipe card, proximity chip or PIN number. This number is then used to retrieve a stored image from a database of known subjects (often referred to as the target or gallery image) and compared with a live image captured at the point of entry (the query image). Access is then granted depending on the acceptance/rejection decision.

The results of the test are calculated according to how many times the accept/reject decision is made correctly. In order to execute this test we must first define our test set of face images. Although the number of images in the test set does not affect the results produced (as the error rates are specified as percentages of image comparisons), it is important to ensure that the test set is sufficiently large such that statistical anomalies become insignificant (for example, a couple of badly aligned images matching well). Also, the type of images (high variation in lighting, partial occlusions etc.) will significantly alter the results of the test. Therefore, in order to compare multiple face recognition systems, they must be applied to the same test set.

However, it should also be noted that if the results are to be representative of system performance in a real world situation, then the test data should be captured under precisely the same circumstances as in the application environment. On the other hand, if the purpose of the experimentation is to evaluate and improve a method of face recognition, which may be applied to a range of application environments, then the test data should present the range of difficulties that are to be overcome. This may mean including a greater percentage of ‘difficult’ images than would be expected in the perceived operating conditions and hence higher error rates in the results produced.

Below we provide the algorithm for executing the verification test. The algorithm is applied to a single test set of face images, using a single function call to the face recognition algorithm: CompareFaces(FaceA, FaceB). This call is used to compare two facial images, returning a distance score indicating how dissimilar the two face images are: the lower the score the more similar the two face images. Ideally, images of the same face should produce low scores, while images of different faces should produce high scores.

Every image is compared with every other image, no image is compared with itself and no pair is compared more than once (we assume that the relationship is symmetrical). Once two images have been compared, producing a similarity score, the ground-truth is used to determine if the images are of the same person or different people. In practical tests this information is often encapsulated as part of the image filename (by means of a unique person identifier). Scores are then stored in one of two lists: a list containing scores produced by comparing images of different people and a list containing scores produced by comparing images of the same person. The final acceptance/rejection decision is made by application of a threshold. Any incorrect decision is recorded as either a false acceptance or false rejection. The false rejection rate (FRR) is calculated as the percentage of scores from the same people that were classified as rejections. The false acceptance rate (FAR) is calculated as the percentage of scores from different people that were classified as acceptances.

    def verification_test(test_set, person_id, compare_faces):
        # Compare every image with every other image exactly once; no image
        # is compared with itself (the distance is assumed to be symmetrical).
        accept_scores = []   # scores from comparing images of the same person
        reject_scores = []   # scores from comparing images of different people
        for index_a in range(len(test_set)):
            for index_b in range(index_a + 1, len(test_set)):
                score = compare_faces(test_set[index_a], test_set[index_b])
                # person_id gives the ground-truth identity of each image
                # (in practice encoded in the image filename).
                if person_id[index_a] == person_id[index_b]:
                    accept_scores.append(score)
                else:
                    reject_scores.append(score)

        # Vary the threshold through the entire range of scores, producing one
        # (FRR, FAR) pair per threshold value for the error rate curve.
        error_curve = []
        for threshold in sorted(accept_scores + reject_scores):
            false_accept_count = sum(1 for s in reject_scores if s <= threshold)
            false_reject_count = sum(1 for s in accept_scores if s > threshold)
            false_accept_rate = false_accept_count / len(reject_scores)
            false_reject_rate = false_reject_count / len(accept_scores)
            error_curve.append((false_reject_rate, false_accept_rate))
        return error_curve

These two error rates express the inadequacies of the system when operating at a specific threshold value. Ideally, both these figures should be zero, but in reality reducing either the FAR or FRR (by altering the threshold value) will inevitably result in increasing the other. Therefore, in order to describe the full operating range of a particular system, we vary the threshold value through the entire range of scores produced. The application of each threshold value produces an additional FAR, FRR pair, which when plotted on a graph produces the error rate curve shown below.


Figure 4-5 - Example Error Rate Curve produced by the verification test.

The equal error rate (EER) can be seen as the point at which FAR is equal to FRR. This EER value is often used as a single figure representing the general recognition performance of a biometric system and allows for easy visual comparison of multiple methods. However, it is important to note that the EER does not indicate the level of error that would be expected in a real world application. It is unlikely that any real system would use a threshold value such that the percentage of false acceptances were equal to the percentage of false rejections. Secure site access systems would typically set the threshold such that false acceptances were significantly lower than false rejections: unwilling to tolerate intruders at the cost of inconvenient access denials. Surveillance systems on the other hand would require low false rejection rates to successfully identify people in a less controlled environment. Therefore we should bear in mind that a system with a lower EER might not necessarily be the better performer towards the extremes of its operating capability.
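Given the (FRR, FAR) pairs produced by the threshold sweep, the EER can be estimated as the point at which the two rates are closest to equal. A brief sketch, assuming the error_curve list produced by the verification test above (averaging the two nearest rates is an illustrative choice, not the thesis's prescription):

    def equal_error_rate(error_curve):
        # error_curve holds (false_reject_rate, false_accept_rate) pairs,
        # one per threshold value; the EER lies where FRR and FAR coincide.
        frr, far = min(error_curve, key=lambda pair: abs(pair[0] - pair[1]))
        return (frr + far) / 2.0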

There is a strong connection between the above graph and the receiver operating characteristic (ROC) curves, also used in such experiments. Both graphs are simply two visualisations of the same results, in that the ROC format uses the True Acceptance Rate (TAR), where TAR = 1.0 – FRR, in place of the FRR, effectively flipping the graph vertically. Another visualisation of the verification test results is to display both the FRR and FAR as functions of the threshold value. This presentation format provides a reference to determine the threshold value necessary to achieve a specific FRR and FAR. The EER can be seen as the point where the two curves intersect.


Figure 4-6 - Example error rate curve as a function of the score threshold

The fluctuation of these error curves due to noise and other errors is dependent on the number of face image comparisons made to generate the data. A small dataset that only allows for a small number of comparisons will result in a jagged curve, in which large steps correspond to the influence of a single image on a high proportion of the comparisons made. A typical dataset of 720 images (as used in section 4.2.2) provides 258,840 verification operations, hence a drop of 1% EER represents an additional 2,588 correct decisions, whereas the quality of a single image could cause the EER to fluctuate by up to 0.28%.
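These figures follow directly from the number of unordered image pairs:

\binom{720}{2} = \frac{720 \times 719}{2} = 258{,}840, \qquad 0.01 \times 258{,}840 \approx 2{,}588, \qquad \frac{719}{258{,}840} \approx 0.28\%

so a 1% change in EER corresponds to roughly 2,588 decisions, while a single image, which takes part in 719 comparisons, can shift the rates by at most about 0.28%.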

4.2.2 Results

As a simple experiment to test the direct correlation method, we apply the technique described above to a test set of 720 images of 60 different people, taken from the AR Face Database [39]. Every image is compared with every other image in the test set to produce a likeness score, providing 258,840 verification operations from which to calculate false acceptance rates and false rejection rates. The error curve produced is shown in Figure 4-7.

Figure 4-7 - Error rate curve produced by the direct correlation method using no image preprocessing.

We see that an EER of 25.1% is produced, meaning that at the EER threshold approximately one quarter of all verification operations carried out resulted in an incorrect classification. There are a number of well-known reasons for this poor level of accuracy. Tiny changes in lighting, expression or head orientation cause the location in image space to change dramatically. Images in face space are moved far apart due to these image capture conditions, despite being of the same person’s face. The distance between images of different people becomes smaller than the area of face space covered by images of the same person and hence false acceptances and false rejections occur frequently. Other disadvantages include the large amount of storage necessary for holding many face images and the intensive processing required for each comparison, making this method unsuitable for applications with large databases. In section 4.3 we explore the eigenface method, which attempts to address some of these issues.

