VLDB Journal manuscript No. (will be inserted by the editor)
On Automating Web Services Discovery
Categories of papers in the Proceedings of the VLDB Endowment. The VLDB Endowment publishes research papers in various categories, including:

1. Core Database Technology: papers that focus on fundamental database techniques, such as query processing and optimization, concurrency control, index structures, data modeling, data storage, and data retrieval.
2. Data Management in the Cloud and Distributed Systems: papers that address the unique challenges of managing data in distributed systems, including topics like data replication, consistency models, distributed query processing, fault tolerance, and scalability.
3. Data Mining and Knowledge Discovery: papers that explore techniques for discovering patterns, trends, and insights from large datasets, including topics like data clustering, classification, regression, association rule mining, and anomaly detection.
4. Information Extraction and Retrieval: papers that focus on techniques for extracting structured information from unstructured or semi-structured data, as well as methods for efficient indexing and retrieval of information from large textual datasets.
5. Graph Data Management and Mining: papers that deal with the management and mining of graph-structured data, including topics like graph algorithms, graph querying, graph summarization, and graph-based machine learning.
6. Sensor Systems and the Internet of Things: papers that address the challenges of managing and analyzing data from sensor networks or IoT devices, including topics like data streaming, event processing, sensor data fusion, and anomaly detection in IoT environments.
7. Data Visualization and Exploratory Data Analysis: papers that focus on techniques for visually analyzing and exploring large datasets, including topics like interactive data visualization, visual analytics, and visual storytelling.
8. Privacy, Security, and Ethics in Data Management: papers that discuss the challenges and solutions related to privacy-preserving and secure data management, as well as the ethical implications of collecting, analyzing, and sharing data.

These categories are not exhaustive, and other related topics are also covered in the Proceedings of the VLDB Endowment.
Paper submission email templates

Sample 1: submission email format for a scientific journal

Dear Mr./Ms. xx,

On behalf of all the authors, I hereby submit the complete manuscript 《低铝硅比铝土矿降铁、脱硅试验研究》 (Experimental Study on Iron Removal and Desilication of Bauxite with a Low Alumina-to-Silica Ratio) in the hope of publishing it in the journal 《化工矿物与加工》, and I solemnly declare on behalf of all the authors that: (1) the work is original, the authors hold full intellectual property rights in it, and no confidentiality issues are involved; (2) its content has not been published in any language in China or abroad, and the manuscript has not been submitted to more than one journal; (3) I will be responsible for informing the other authors of any future changes to the content or the author list.

I take full responsibility for the statements above.

Brief summary of the paper: Taking bauxite ore from a certain locality as the research object, and aiming at removing iron and silicon from bauxite with a low alumina-to-silica (A/S) ratio, a combined magnetic separation and flotation flowsheet was selected through experimental study, and good process indices were obtained.

The A/S ratio of the raw ore is ; the combined magnetic separation and flotation flowsheet yielded a bauxite concentrate with an A/S ratio of , an Al2O3 recovery of %, and an iron content in the concentrate of %.

The innovation of this paper is the combined magnetic separation and flotation flowsheet: the experimental study obtained good process indices and achieved the goal of removing iron and silicon from bauxite with a low A/S ratio. Thank you very much for reviewing this paper; we look forward to receiving the experts' review comments soon.

If you have any questions about this paper, please contact me at any time.

In accordance with the Copyright Law of the People's Republic of China, the authors agree to transfer the copyright of this article (including copyright in all media) to the xx office of 《化工矿物与加工》.

The xx office shall pay the authors their remuneration in a single payment.

Respectfully yours,

Submitter: xx; ID number: ; all authors' names: xx; date of birth: 1990-06-20; degree: master's student; research area: mineral separation; telephone: xx; e-mail: xx

Sample 2: cover letter

Dear Mr./Ms. xx,

On behalf of all the authors, I hereby submit the complete manuscript 《基于改进粒子群优化算法和混沌神经网络的电力系统负荷预测》 (Power System Load Forecasting Based on an Improved Particle Swarm Optimization Algorithm and a Chaotic Neural Network) in the hope of publishing it in the journal 《电网技术》, and I solemnly declare on behalf of all the authors that: (1) all the authors have read the paper, agree to its submission to your journal, and have no objection to the order of authorship, and there are no conflicts of interest or authorship disputes; (2) the work is original, the authors hold full intellectual property rights in it, and no confidentiality issues are involved; (3) its content has not been published in any language in China or abroad, and the manuscript has not been submitted to more than one journal; (4) I will be responsible for informing the other authors of any future changes to the content or the author list.
Thank-you and response letters for SCI papers

Sample 1: Do you really know how to write the response letter for an SCI revision? (reposted)

A revised manuscript generally has to be accompanied by a response to the reviewers, and whether a paper is ultimately accepted depends to a large degree on how the reviewers are answered. A good response letter is an important bargaining chip for acceptance, whereas an inappropriate one may, at best, lead to yet another round of revision and delay publication, and, at worst, get the paper rejected and waste all the effort invested so far.

So how do you write a concise yet complete response to the reviewers? Read on.

1. The opening of the response letter. (1) If the decision email contains the editor-in-chief's name, address them by name, "Dear Dr. **", as a sign of respect; if no personal information about the editor is given, "Dear Editor" or the like will do. (2) The manuscript number, i.e. the ID of the paper, must be the same as the number shown in the email; it can also be looked up in the submission system. Editors handle many manuscripts every day, and quoting the number lets the editor and the reviewers see immediately which paper the response belongs to; making things easy for others makes things easy for yourself. (3) The opening should first state how the revisions are marked in the manuscript, e.g. highlighted in bold, in red, or with MS Word's track-changes mode; then thank the reviewers and the editor for their review and comments; the title of the paper should also appear in this part (preferably highlighted in bold or italics); and finally come the polite formulas, e.g. that we would like to resubmit the paper to your journal **** and hope you will kindly review it again. Make sure the language is sincere, modest and fluent throughout.

2. Next, reply to the reviewers' comments one by one; this covers every question from every reviewer, our thanks for each of them, and the way the manuscript was revised. Note the following: (1) it is best to thank the reviewer before answering each individual comment; courtesy costs nothing, and writing a few extra words is a small price for a smooth publication; (2) address the reviewers' requests accurately and completely; where the requested material cannot be supplied, argue the point thoroughly with citations from the literature, with sound reasons and evidence; (3) list the revisions in detail; (4) keep the tone tactful and modest; (5) be adaptable, and make each answer fit its comment as precisely as possible.

3. The third part is the closing of the letter. This is essentially one more round of thanks, followed by "Yours sincerely" and the corresponding author's signature.

The response letter is the finishing touch: an unsuitable reply can backfire. So if you have any questions, dear readers, ask away; we will take them on and answer them one by one.

Sample 2: writing the response letter for an SCI submission

1. The thank-throughout principle. 2. The itemize-every-detail principle. Concretely, the goal is that the reviewer does not need to look at the revised manuscript at all: the response letter alone should make almost all of the authors' changes clear, including the revised figures and tables, the added references, and each revised sentence before and after the change, together with the page, paragraph and line number where it appears.
Call for Papers

Dear author,

This is the Journal of Energy and Power Engineering (ISSN 1934-8975), created in December 2007. We are interested in various kinds of professional papers about energy and power. The Journal of Energy and Power Engineering, a monthly professional academic journal, strives to provide the best platform for researchers and scholars worldwide to exchange their latest findings and results.

Current columns involve: Thermal Science; Fluid Mechanics; Energy and Environment; Power System and Automation; Power Electronics; High Voltage and Pulse Power; Sustainable Energy.

Information for authors:
1. The manuscript should be original and must not have been published previously. Please do not submit material that is currently being considered by another journal.
2. Manuscripts may be 3,000-8,000 words, or longer if approved by the editor, including abstract, text, tables, footnotes, appendixes, and references. The title should not exceed 15 words, and the abstract should not exceed 150 words. 3-8 keywords are required.
3. The manuscript should be in MS Word format, submitted as an email attachment to our email address.

Editorial procedures: All papers considered appropriate for this journal are reviewed anonymously by at least two outside reviewers. The review process usually takes 4-5 weeks. Papers are accepted for publication subject to non-substantive, stylistic editing. The Editor reserves the right to make any necessary changes to a paper, to request the author to do so, or to reject the paper submitted. A copy of the edited paper along with the first proofs will be sent to the author for proofreading; it should be corrected and returned to the Editor within seven days. Once the final version of the paper has been accepted, authors are requested not to make further changes to the text.

Submitting of manuscripts: All manuscripts submitted will be considered for publication. Please visit our website for our automatic paper submission system, or send your paper as an email attachment to ****************************, **************************, ***************************, **************************.

Journal of Energy and Power Engineering
David Publishing Company
Tel: 001-323-984-7526, 001-323-410-1082; Fax: 001-323-984-7374
LEOPOLD-FRANZENS UNIVERSITY
Chair of Engineering Mechanics
o.Univ.-Prof. Dr.-Ing. habil. G.I. Schuëller, Ph.D.
G.I.Schueller@uibk.ac.at
Technikerstrasse 13, A-6020 Innsbruck, Austria, EU
Tel.: +43 512 507 6841, Fax: +43 512 507 2905
mechanik@uibk.ac.at, http://mechanik.uibk.ac.at

IfM-Publication 2-407
G.I. Schuëller. Developments in stochastic structural mechanics. Archive of Applied Mechanics, published online, 2006.

Archive of Applied Mechanics manuscript No. (will be inserted by the editor)

Developments in Stochastic Structural Mechanics

G.I. Schuëller
Institute of Engineering Mechanics, Leopold-Franzens University, Innsbruck, Austria, EU

Received: date / Revised version: date

Abstract. Uncertainties are a central element in structural analysis and design. But even today they are frequently dealt with in an intuitive or qualitative way only. However, as already suggested 80 years ago, these uncertainties may be quantified by statistical and stochastic procedures. In this contribution it is attempted to shed light on some of the recent advances in the now established field of stochastic structural mechanics and also to solicit ideas on possible future developments.

1 Introduction

The realistic modeling of structures and of the expected loading conditions, as well as of the mechanisms of their possible deterioration with time, is undoubtedly one of the major goals of structural and engineering mechanics respectively. It has been recognized that this should also include the quantitative consideration of the statistical uncertainties of the models and the parameters involved [56]. There is also a general agreement that probabilistic methods should be strongly rooted in the basic theories of structural engineering and engineering mechanics and hence represent the natural next step in the development of these fields.

It is well known that modern methods leading to a quantification of the uncertainties of stochastic systems require computational procedures. The development of these procedures goes in line with the computational methods in current traditional (deterministic) analysis for the solution of problems required by engineering practice, where computational procedures certainly dominate. Hence, their further development within computational stochastic structural analysis is a most important requirement for the dissemination of stochastic concepts into engineering practice. Most naturally, procedures to deal with stochastic systems are computationally considerably more involved than their deterministic counterparts, because the parameter set assumes a (finite or infinite) number of values, in contrast to a single point in the parameter space. Hence, in order to be competitive and tractable in practical applications, the computational efficiency of the procedures utilized is a crucial issue, and its significance should not be underestimated. Improvements in efficiency can be attributed to two main factors, i.e. improved hardware in terms of ever faster computers, and improved software, which means improving the efficiency of computational algorithms, including the use of parallel processing and computer farming respectively. For a continuous increase of their efficiency through software developments, the computational procedures of stochastic analysis should follow a path similar to the one taken in the seventies and eighties when developing the deterministic FE approach.
One important aspect in this fast development was the focus on numerical methods adjusted to the strengths and weaknesses of numerical computational algorithms. In other words, traditional ways of structural analysis developed before the computer age have been dropped, redesigned and adjusted respectively to meet the new requirements posed by the computational facilities.

Two main streams of computational procedures in stochastic structural analysis can be observed. The first of these main classes is the generation of sample functions by Monte Carlo simulation (MCS). These procedures might be categorized further according to their purpose:

– Realizations of prescribed statistical information: samples must be compatible with prescribed stochastic information such as spectral density, correlation, distribution, etc. Applications are: (1) unconditional simulation of stochastic processes, fields and waves; (2) conditional simulation compatible with observations and a priori statistical information.
– Assessment of the stochastic response for a mathematical model with prescribed statistics (random loading/system parameters) of the parameters. Applications are: (1) representative samples for the estimation of the overall distribution, by indiscriminate (blind) generation of samples or by numerical integration of SDEs; (2) representative samples for the reliability assessment, generating adverse rare events with positive probability, i.e. by (a) variance reduction techniques controlling the realizations of RVs, or (b) controlling the evolution in time of the sampling functions.

The other main class provides numerical solutions to analytical procedures. Grouping again according to the respective purpose, the following classification can be made: numerical solutions of the Kolmogorov equations (Galerkin's method, Finite Element method, Path Integral method), moment closure schemes, computation of the evolution of moments, maximum entropy procedures, and asymptotic stability of diffusion processes.

In the following, some of the outlined topics will be addressed, stressing new developments. These topics are described within the next six subject areas, each focusing on a different issue, i.e. representation of stochastic processes and fields, structural response, stochastic FE methods and parallel processing, structural reliability and optimization, and stochastic dynamics.
In this context it should be mentioned that, aside from the MIT conference series, the USNCCM, ECCM and WCCM conferences also devote a larger share of their sessions to computational stochastic issues.

2 Representation of Stochastic Processes

Many quantities involving random fluctuations in time and space may be adequately described by stochastic processes, fields and waves. Typical examples of engineering interest are earthquake ground motion, sea waves, wind turbulence, road roughness, imperfections of shells, fluctuating properties in random media, etc. For this setup, the probabilistic characteristics of the process are known from various measurements and investigations in the past. In structural engineering, the available probabilistic characteristics of random quantities affecting the loading or the mechanical system can often not be utilized directly to account for the randomness of the structural response, due to its complexity. For example, in the common case of strong earthquake motion, the structural response will in general be non-linear, and it might be too difficult to compute the probabilistic characteristics of the response by any means other than Monte Carlo simulation. For the purpose of Monte Carlo simulation, sample functions of the stochastic process involved must be generated. These sample functions should accurately represent the characteristics of the underlying stochastic process or field and might be stationary or non-stationary, homogeneous or non-homogeneous, one-dimensional or multi-dimensional, uni-variate or multi-variate, Gaussian or non-Gaussian, depending very much on the accuracy required of a realistic representation of the physical behavior and on the available statistical data.

The main requirement on a sample function is the accurate representation of the available stochastic information on the process. The associated mathematical model can be selected in any convenient manner, as long as it reproduces the required stochastic properties. Therefore, quite different representations have been developed and might be utilized for this purpose.
The most common representations are e.g.: ARMA and AR models; filtered white noise (SDEs); shot noise and filtered Poisson white noise; covariance decomposition; Karhunen-Loève and Polynomial Chaos expansions; spectral representation; wavelet representation.

Among the various methods listed above, the spectral representation methods appear to be the most widely used (see e.g. [71,86]). According to this procedure, samples with specified power spectral density information are generated. For the stationary or homogeneous case, the Fast Fourier Transform (FFT) technique is utilized for a dramatic improvement of the computational efficiency (see e.g. [104,105]). Advances in this field provide efficient procedures for the generation of 2D and 3D homogeneous Gaussian stochastic fields using the FFT technique (see e.g. [87]). The spectral representation method generates ergodic sample functions, each of which fulfills exactly the requirements of a target power spectrum. These procedures can be extended to the non-stationary case and to the generation of stochastic waves, and they can incorporate non-Gaussian stochastic fields by a memoryless nonlinear transformation together with an iterative procedure to meet the target spectral density.

The above spectral representation procedures for the unconditional simulation of stochastic processes and fields can also be extended to conditional simulation techniques for Gaussian fields (see e.g. [43,44]) employing the conditional probability density method. The aim of this procedure is the generation of Gaussian random variates U_n under the condition that (n-1) realizations u_i of U_i, i = 1, 2, ..., (n-1), are specified and the a priori known covariances are satisfied. An alternative procedure is based on the so-called Kriging method, used in geostatistical applications and applied also to conditional simulation problems in earthquake engineering (see e.g. [98]). The Kriging method has been improved significantly (see e.g. [36]), which has made it theoretically clearer and computationally more efficient. The differences and similarities of the conditional probability density method and the (modified) Kriging method are discussed in [37], showing the equivalence of both procedures if the process is Gaussian with zero mean.

A quite general spectral representation utilized for Gaussian random processes and fields is the Karhunen-Loève expansion of the covariance function (see e.g. [54,33]). This representation is applicable to stationary (homogeneous) as well as non-stationary (inhomogeneous) stochastic processes (fields). The expansion of a stochastic process (field) u(x,θ) takes the form

u(x,\theta) = \bar{u}(x) + \sum_{i=1}^{\infty} \xi_i(\theta)\,\sqrt{\lambda_i}\,\phi_i(x)    (1)

where the symbol θ indicates the random nature of the corresponding quantity, ū(x) denotes the mean, and φ_i(x) are the eigenfunctions and λ_i the eigenvalues of the covariance function. The set {ξ_i(θ)} forms a set of orthogonal (uncorrelated) zero-mean random variables with unit variance. The Karhunen-Loève expansion is mean-square convergent irrespective of the probabilistic nature of the process, provided it possesses a finite variance. For the important special case of a Gaussian process or field, the random variables {ξ_i(θ)} are independent standard normal random variables. In many practical applications, where the random quantities vary smoothly with respect to time or space, only a few terms are necessary to capture the major part of the random fluctuation of the process. The expansion's major advantage is the reduction from a large number of correlated random variables to a few most important uncorrelated ones. Hence this representation is especially suitable for band-limited colored excitation and for the stochastic FE representation of random media, where the random variables are usually strongly correlated. It might also be utilized to represent the correlated stochastic response of MDOF systems by a few most important variables, hence achieving a space reduction.
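A minimal numerical sketch of the truncated expansion (1) may help to fix ideas: the covariance function is discretized on a grid, its eigenpairs take the roles of λ_i and φ_i(x), and Gaussian sample paths follow directly. The 1D grid, the exponential covariance kernel and the correlation length used below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Truncated Karhunen-Loeve expansion of eq. (1) on a 1D grid.
n, ell, M = 200, 0.2, 10                    # grid size, correlation length, KL terms
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # assumed exponential covariance

lam, phi = np.linalg.eigh(C)                # eigenpairs of the covariance matrix
order = np.argsort(lam)[::-1]               # keep the M largest eigenpairs
lam, phi = lam[order][:M], phi[:, order][:, :M]

def kl_sample(rng=np.random.default_rng(0)):
    """One Gaussian sample path: u = sum_i xi_i sqrt(lambda_i) phi_i(x), zero mean assumed."""
    xi = rng.standard_normal(M)             # uncorrelated standard normal xi_i
    return phi @ (np.sqrt(lam) * xi)

u = kl_sample()                             # one realization of the field
```

For a smooth covariance kernel the eigenvalues decay rapidly, which is exactly why a few terms often suffice, as noted above.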
A generalization of the above Karhunen-Loève expansion has been proposed for applications where the covariance function is not known a priori (see [16,33,32]). The stochastic process (field) u(x,θ) then takes the form

u(x,\theta) = a_0(x)\Gamma_0 + \sum_{i_1=1}^{\infty} a_{i_1}(x)\,\Gamma_1(\xi_{i_1}(\theta)) + \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} a_{i_1 i_2}(x)\,\Gamma_2(\xi_{i_1}(\theta),\xi_{i_2}(\theta)) + \cdots    (2)

which is denoted the Polynomial Chaos expansion. Introducing a one-to-one mapping to a set with ordered indices {Ψ_i(θ)} and truncating eqn. (2) after the p-th term, the above representation reads

u(x,\theta) = \sum_{j=0}^{p} u_j(x)\,\Psi_j(\theta)    (3)

where the symbol Γ_n(ξ_{i_1}, ..., ξ_{i_n}) denotes the Polynomial Chaos of order n in the independent standard normal random variables. These polynomials are orthogonal, so that the expectation (or inner product) ⟨Ψ_i Ψ_j⟩ = δ_{ij}, with δ_{ij} the Kronecker symbol. For the special case of a Gaussian random process, the above representation coincides with the Karhunen-Loève expansion. The Polynomial Chaos expansion is adjustable in two ways: increasing the number of random variables {ξ_i} results in a refinement of the random fluctuations, while an increase of the maximum order of the polynomial captures the non-linear (non-Gaussian) behavior of the process. However, the relation between accuracy and numerical effort still remains to be shown.

The spectral representation by Fourier analysis is not well suited to describing local features in the time or space domain. This disadvantage is overcome in wavelet analysis, which provides an alternative way of breaking a signal down into its constituent parts. For more details on this approach, the reader is referred to [24,60].

In some applications the physics or the data might be inconsistent with the Gaussian distribution. For such cases, non-Gaussian models have been developed employing various concepts to meet the desired target distribution as well as the target correlation structure (spectral density). Certainly the most straightforward procedure is the above-mentioned memoryless non-linear transformation of Gaussian processes utilizing the spectral representation. An alternative approach utilizes linear and non-linear filters to represent normal and non-Gaussian processes and fields excited by Gaussian white noise. Linear filters excited by polynomial forms of Poisson white noise have been developed in [59] and [34]. These procedures allow the evaluation of moments of arbitrary order without having to resort to closure techniques.
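The memoryless non-linear transformation mentioned above is simple enough to sketch in a few lines: a Gaussian sample is mapped through the target inverse CDF composed with the standard normal CDF. The lognormal target marginal below is an arbitrary illustrative choice; matching a target spectral density as well would additionally require the iterative correction cited in the text.

```python
import numpy as np
from scipy import stats

# Memoryless translation of a Gaussian sample to a non-Gaussian marginal:
#   u = F_target^{-1}( Phi(z) ),  z ~ N(0, 1)
rng = np.random.default_rng(1)
z = rng.standard_normal(1000)                    # stand-in for a Gaussian sample path
u = stats.lognorm.ppf(stats.norm.cdf(z), s=0.5)  # lognormal marginal, shape s assumed
```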
Non-linear filters are utilized to generate a stationary non-Gaussian stochastic process in agreement with a given first-order probability density function and a given spectral density [48,15]. In the Kontorovich-Lyandres procedure, as used in [48], the drift and diffusion coefficients are selected such that the solution fits the target probability density, and the parameters in the solution form are then adjusted to approximate the target spectral density. The approach by Cai and Lin [15] simplifies this procedure by matching the spectral density through adjustment of the drift coefficients alone, followed by adjusting the diffusion coefficient to approximate the distribution of the process. The latter approach is especially suitable, and computationally highly efficient, for the long-term simulation of stationary stochastic processes, since the computational expense increases only linearly with the number n of discrete sample points, while the spectral approach has a growth rate of n ln n when applying the efficient FFT technique. For generating samples of a non-linear filter represented by a stochastic differential equation (SDE), well-developed numerical procedures are available (see e.g. [47]); the simplest of them is sketched below.
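The following is a minimal Euler-Maruyama sketch for a scalar filter SDE dX = a(X) dt + b(X) dW. The cubic drift and the constant diffusion are illustrative placeholders, not the calibrated coefficient pairs of the Kontorovich-Lyandres or Cai-Lin procedures discussed above.

```python
import numpy as np

# Euler-Maruyama integration of a scalar SDE  dX = a(X) dt + b(X) dW.
def euler_maruyama(a, b, x0, dt, n_steps, rng=np.random.default_rng(2)):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.standard_normal() * np.sqrt(dt)      # Brownian increment
        x[k + 1] = x[k] + a(x[k]) * dt + b(x[k]) * dw
    return x

path = euler_maruyama(a=lambda x: x - x**3,  # assumed non-linear (bistable) drift
                      b=lambda x: 0.5,       # assumed constant diffusion
                      x0=0.0, dt=1e-3, n_steps=100_000)
```

Note that the cost of such a long stationary run grows linearly with the number of sample points, which is the efficiency argument made in the text.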
3 Response of Stochastic Systems

The assessment of the stochastic response is the main theme in stochastic mechanics. Contrary to the representation of stochastic processes and fields, which is designed to fit available statistical data and information, the output of the mathematical model is not prescribed and needs to be determined in some stochastic sense. Hence the mathematical model cannot be selected freely but is specified a priori. For stochastic systems, the model involves random system parameters and/or random loading. Please note that, due to space limitations, the question of model validation cannot be treated here.

For the characterization of the available numerical procedures, some classification with regard to the structural model, the loading, and the required description of the stochastic response is most instrumental. Concerning the structural model, a distinction between its properties, i.e. whether it is deterministic or stochastic, linear or non-linear, as well as the number of degrees of freedom (DOF) involved, is essential. As a criterion for the feasibility of a particular numerical procedure, the number of DOFs of the structural system is one of the most crucial parameters. Therefore, a distinction between dynamical-system models and general FE discretizations is suggested, where dynamical systems are associated with a low state-space dimension of the structural model, while FE discretizations carry no essential restriction regarding the number of DOFs. The stochastic loading can be grouped into static and dynamic loading. Stochastic dynamic loading might be characterized further by its distribution and correlation and by its independence from, or dependence on, the response, resulting in categorizations such as Gaussian and non-Gaussian, stationary and non-stationary, white noise or colored, and additive or multiplicative (parametric) excitation. Apart from the mathematical model, the terms in which the stochastic response is required play an essential role, ranging from assessing the first two moments of the response to reliability assessment and stability analysis. The large number of possibilities for evaluating the stochastic response outlined above does not allow for a discussion of the entire subject, so only some selected advances and new directions will be addressed.

As already mentioned above, one can distinguish between two main categories of computational procedures treating the response of stochastic systems. The first is based on Monte Carlo simulation, and the second provides numerical solutions of analytical procedures for obtaining quantitative results. Regarding the latter, a clear distinction between dynamical-system models and FE models should be made. Current research efforts in stochastic dynamics focus to a large extent on dynamical-system models, while there are few new numerical approaches concerning the evaluation of the stochastic dynamic response of, e.g., FE models. Numerical solutions of the Kolmogorov equations are typical examples belonging to dynamical-system models, where the available approaches are computationally feasible only for state-space dimensions one to three, and in exceptional cases for dimension four. Galerkin, Finite Element (FE) and Path Integral methods, respectively, are generally used to solve numerically the forward (Fokker-Planck) and backward Kolmogorov equations. For example, in [8,92] the FE approach is employed for stationary and transient solutions, respectively, of the mentioned forward and backward equations for second-order systems. First-passage probabilities have been obtained by employing a Petrov-Galerkin FE method to solve the backward and the related Pontryagin-Vitt equations. An instructive comparison of the computational efforts of Monte Carlo simulation and the FE method is given, e.g., in an earlier IASSAR report [85].

The Path Integral method follows the evolution of the (transition) probability function over short time intervals, exploiting the fact that short-time transition probabilities for normal white noise excitations are locally Gaussian distributed. All existing path integration procedures utilize certain interpolation schemes, where the probability density function (PDF) is represented by values at discrete grid points.
In a wider sense, cell mapping methods (see e.g. [38,39]) can be regarded as special setups of the path integral procedure. As documented in [9], the cumulant neglect closure described in section 7.3 has been automated, and computational procedures for the automated generation and solution of the closed set of moment equations have been developed. The method can be employed for an arbitrary number of states and closed at arbitrary levels. The approach, however, is limited by the available computational resources, since the computational cost grows exponentially with the number of states and the selected closure level.

The developments of numerical procedures discussed above deal with low-dimensional dynamical systems, which are employed for investigating strongly non-linear behavior subjected to (Gaussian) white noise excitation. Although dynamical-system formulations are quite general and extendible to treat non-Gaussian and colored (filtered) excitation of larger systems, the computational expense grows exponentially, rendering most numerical approaches unfeasible for larger systems. This so-called "curse of dimensionality" has not been overcome yet, and it is questionable whether it ever will be, despite the fast-developing computational possibilities.

For this reason, the alternative approach based on Monte Carlo simulation (MCS) gains importance. Several aspects favor procedures based on MCS in engineering applications: (1) a considerably smaller growth rate of the computational effort with dimensionality than for analytical procedures; (2) general applicability; MCS is well suited for parallel processing (see section 5.1) and computationally straightforward; (3) non-linear complex behavior does not complicate the basic procedure; (4) it remains manageable for complex systems.

Contrary to numerical solutions of analytical procedures, the employed structural model and the type of stochastic loading do not play a decisive role for MCS. For this reason, MCS procedures might be structured according to their purpose, i.e. whether sample functions are generated for the estimation of the overall distribution or for generating rare adverse events for an efficient reliability assessment. In the former case, the probability space is covered uniformly by an indiscriminate (blind) generation of sample functions representing the random quantities. Basically, a set of random variables is generated by a pseudo-random number generator, followed by a deterministic structural analysis. Based on the generated random numbers, realizations of the random processes, fields and waves addressed in section 2 are constructed and utilized without any further modification in the subsequent structural analysis.

The situation may not be considered straightforward, however, in the case of a discriminate MCS for the reliability estimation of structures, where the rare events contributing considerably to the failure probability must be generated. Since the effectiveness of direct, indiscriminate MCS is not satisfactory for producing a statistically relevant number of low-probability realizations in the failure domain, the generation of samples is restricted or guided in some way. The most important class are the variance reduction techniques, which operate on the probability of realizations of the random variables. The most widely used representative of this class in structural reliability assessment is Importance Sampling, where a suitable sampling distribution controls the generation of realizations in the probability space.
The challenge in Importance Sampling is the construction of a suitable sampling distribution, which depends in general on the specific structural system and on the failure domain (see e.g. [84]). Hence the generation of sample functions is no longer independent of the structural system and the failure criterion, as it is for indiscriminate direct MCS. Due to these dependencies, computational procedures for the automated establishment of sampling distributions are urgently needed. Adaptive numerical strategies utilizing importance directional sampling (e.g. [11]) are steps in this direction. The effectiveness of the Importance Sampling approach depends crucially on the complexity of the system response as well as on the number of random variables (see also section 5.2). Static problems (linear and nonlinear) with few random variables might be treated effectively by this approach. Linear systems where the randomness is represented by a large number of RVs can also be treated efficiently by employing first-order reliability methods (see e.g. [27]). This approach, however, is questionable for the case of non-linear stochastic dynamics involving a large set of random variables, where the computational effort required for establishing a suitable sampling distribution might exceed the effort needed for indiscriminate direct MCS.

Instead of controlling the realizations of the random variables, the evolution of the generated samples can alternatively be controlled [68]. This approach is limited to stochastic processes and fields with Markovian properties and utilizes an evolutionary programming technique for the generation of more "important" realizations in the low-probability domain. The approach is especially suitable for white noise excitation and non-linear systems, where Importance Sampling is rather difficult to apply. Although the approach cannot deal with spectral representations of the stochastic processes, it is capable of making use of linearly and non-linearly filtered excitation. Again, this is just contrary to Importance Sampling, which can be applied to spectral representations but not to white-noise filtered excitation. The basic variance-reduction idea behind Importance Sampling is illustrated by the toy sketch below.
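In the following one-dimensional toy problem, the failure probability P(g(U) ≤ 0) of a linear limit state in standard normal space is estimated by sampling from a unit-variance normal density shifted to the (here known) design point. Both the limit state and the shift are illustrative assumptions; in real applications the sampling density must be constructed from the structural problem itself, which is precisely the difficulty discussed above.

```python
import numpy as np

# Importance Sampling estimate of P(g(U) <= 0) with g(u) = beta - u,
# i.e. the exact failure probability is Phi(-beta).
rng = np.random.default_rng(3)
beta = 4.0                                   # reliability index of the toy problem
g = lambda u: beta - u                       # failure domain: u >= beta

n = 10_000
v = rng.standard_normal(n) + beta            # samples from the shifted density N(beta, 1)
w = np.exp(-0.5 * v**2) / np.exp(-0.5 * (v - beta)**2)   # weights phi(v) / phi(v - beta)
pf = np.mean((g(v) <= 0) * w)

print(pf)   # close to Phi(-4) ~ 3.2e-5; indiscriminate direct MCS with
            # n = 10_000 samples would typically see no failures at all
```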
average method to an underestima-tion and the Weighted-Integral-Method leads to the most accurate results. Moreover,the so called mesh-size problem can be resolved utilizing thisrepresentation.After assembling all Finite Elements,the random structural stiffness matrix K,taken as representative example,assumes the form,K(α)=¯K+ni=1K Iiαi+ni=1nj=1K IIijαiαj+ (4)where¯K is the mean of the matrix,K I i and K II ij denote the determinis-ticfirst and second rate of change with respect to the zero mean random variablesαi andαj and n is the total number of random variables.For normally distributed sets of random variables{α},the correlated set can be represented advantageously by the Karhunen-Lo`e ve expansion[33]and for non-Gaussian distributed random variables by its Polynomial chaos ex-pansion[32],K(θ)=¯K+Mi=0ˆKiΨi(θ)(5)where M denotes the total number of chaos polynomials,ˆK i the associated deterministicfluctuation of the matrix andΨi(θ)a polynomial of standard normal random variablesξj(θ)whereθindicates the random nature of the associated variable.In a second step,the random response of the stochastic structural system is determined.The most widely used procedure for evaluating the stochastic response is the well established perturbation approach(see e.g.[53]).It is well adapted to the FE-formulation and capable to evaluatefirst and second moment properties of the response in an efficient manner.The approach, however,is justified only for small deviations from the center value.Since this assumption is satisfied in most practical applications,the obtainedfirst two moment properties are evaluated satisfactorily.However,the tails of the。
… directly submit your work to thousands of different journals.

How to get started?

There are three ways to get started with Manuscripts:

1. Select one of our built-in templates. Manuscripts comes with over 1,100 verified templates, as well as unverified templates based on CSL styles. We are building literally thousands of them for many types of documents, including journals across all sciences, university-specific PhD theses, grant applications, etc.
2. Import an existing document. Manuscripts supports multiple import formats such as MS Word, LaTeX, Markdown and many more.
3. Create a new blank manuscript.

Selecting a built-in template

Choose 'Create new manuscript' from the welcome screen or, if you do not have the welcome screen available, go to File > New > Manuscript with Template…

Figure 1: Template selector

Browse or search for available templates. Manuscript-verified templates are indicated by a badge icon as shown in Figure 1. Once you have found and selected the template you need, click Choose and a manuscript template is opened for you.

Figure 2: New manuscript from template

I cannot see a template for the document I would like to write, what should I do?

In case you cannot find the template you are looking for, do not worry, you still have several options to get started.

If you are looking to get started with a generic template that consists of an abstract, introduction, methods, results and discussion, you can choose a different journal template (e.g. Genome Biology Research Article) and modify the template to your needs. Remember that the template is only there to help you get started and to provide you with guidance. However, you are still the master of your own manuscript.

If you would rather get started with a blank canvas, choose the Create Empty Manuscript option from the bottom of the template selector and you are all set to write your first document with Manuscripts.

You can contact us at support@ with a link to the journal/template style you would like Manuscripts to support and we will do our best to add your requested template to Manuscripts.

I can find a template but it does not have the 'Manuscript verified' badge next to it.

We have added over 7,000 journal CSL styles that you can choose to get started from to create a template of your own. If you find a journal template without a badge next to the journal name, it means that we have not verified the template for that journal yet.

You can still choose the journal template as your starting point. When you select the template, a pop-up window appears where you can see and edit basic template metadata, such as which sections it should contain, which font and font sizes will be used, and so on. Once you are ready, click Create Manuscript.

Import an existing document to Manuscripts

If you already have a draft in another document format and you would like to continue writing it with Manuscripts, you can easily import the document to Manuscripts and continue writing your draft with ease.

1. Create a new empty manuscript from File > New > Manuscript.
2. Go to File > Import > Content…
3. You can see all the supported document formats that can be imported to Manuscripts by clicking 'Options' in the file browser window.
4. Choose the document that you want to import to Manuscripts, click Open, and the document is opened as a new Manuscripts file.
Figure 3: Import an existing document to Manuscripts

How to reorder manuscript sections

The manuscript outline lets you not just see and navigate an outline of your manuscript project, but also manipulate its structure in powerful ways. Simply drag and drop to reorder sections or paragraphs to change the flow of your document.

How to use the focus mode in Manuscripts

Maintaining focus is essential to productive writing. One of the ways we help you focus on your work is the so-called focus mode. You can activate the focus mode in two ways:

A. Click View > Selected Sections in the main menu.
B. Click the focus button in the manuscript outline, visible when you hover over an individual section on the right edge of the outline.

To toggle back to viewing the entire manuscript, simply:

A. Click View > View All Sections in the main menu, or
B. Click the focus button again.

Figure 4: 1. The manuscript editing window when you have activated the focus mode. 2. The focus mode icon, visible when you hover over a section item.

How to use the gutter menus

The Manuscripts editor includes so-called "gutter helpers", designated with the blue + and o symbols in the left margin of the currently active paragraph. These symbols are buttons that lead you to shortcut actions for inserting (the + symbol) and manipulating (the o symbol) the document contents in a contextual way.

The gutter insertion helper (+ symbol)

Figure 5: How to create a new section

The gutter selection helper (o symbol)

The following example shows what the gutter selection helper tool presents in the case of a paragraph.

Figure 6: The gutter selection helper for a paragraph.

The gutter helper is entirely optional and can be toggled on or off from the Editing Preferences by choosing "Show smart gutter in the left margin when editing".

Figure 7: Smart gutter can be enabled or disabled in the Editing Preferences

Exporting a selected subset of a manuscript

It often comes in handy to place material in your manuscript project which is never intended to reach readers. For instance, you may want to keep notes regarding your research or writing progress: material that helps you organise your writing, but is never intended for publication or for feedback.

Figure 8: Sharing the current selection

Note that if you have selected your bibliography as one of the items you want to share or export, only the subset of it that is cited in the selected part of your manuscript is included in the bibliography.

Spell and grammar checking in Manuscripts

Manuscripts allows you to check your spelling and grammar as you type to avoid disasters like the one shown in Figure 10. Grammar and spell checking can be accessed from Edit > Spelling and Grammar in the main menu.

Figure 9: Enable spell and grammar check

Check spelling and grammar manually

To check spelling and grammar manually, choose Edit > Spelling and Grammar > Show Spelling and Grammar (⌘:).

Figure 10: Check spelling and grammar manually

To update the spelling and grammar analysis of your document after you have made some changes, choose Edit > Spelling and Grammar > Check Document Now (⌘;).

Check spelling and grammar as you type

Spelling and grammar can also be checked as you type, by toggling on Edit > Spelling and Grammar > Check Spelling While Typing.
Grammatical errors are presented in the familiar green, and spelling errors in red.

Figure 11: Check spelling and grammar while typing

Correct spelling automatically

You can even make Manuscripts correct your spelling for you by choosing Edit > Spelling and Grammar > Correct Spelling Automatically.

Choosing a citation style

You can change the manuscript's citation style in the inspector available on the right-hand side of the main window by clicking the Toggle Inspector button in the lower right corner of the application window. This reveals the inspector, which includes two tabs, the second of which is the style inspector. At the bottom it includes the Manuscript Styles inspector palette (where you see the keyboard focus in the screenshot below). This inspector palette is where you can change the citation style.

Figure 13: Changing the citation style in the Inspector

How to change the font family and size

You can change the font family and size and edit other properties of paragraph styles in the inspector on the right-hand side of the main window by clicking the Toggle Inspector button (available in the lower right corner of the application window).

Figure 14: Inspector toggle button icon

This reveals the inspector, which has two tabs, the second of which is the style inspector that includes the Paragraph Styles palette that lets you choose font sizes, etc.

Figure 15: Change font and font size in the Paragraph Styles section of the Inspector

How to create a figure panel

The basic workflow for adding figure panels into your Manuscripts document involves first placing a figure placeholder into your document (which you can caption at this stage), and then adding an image as a second step. Importing manuscript content that includes images is another option, i.e. File > Import > Content. This is not discussed further here.

Creating an empty figure panel

There are three different ways to create an empty figure. Each one of them requires you to first place the text insertion cursor next to the spot where you want to insert the figure. After that, you can either:

1. Click the figure symbol in the toolbar.
2. Use the menu item Insert > Figure in the main menu.
3. Use the smart gutter menu to the left of the paragraph that is in focus, before or after which you intend to add the figure. Click the blue + symbol in it, and choose "Insert Figure".

Figure 16: How to create a figure panel (A: figure panel icon)

The figure placeholder you create will look something like the following:

Figure 17: New empty figure panel

Adding images to a figure panel

You can add images to a figure panel in three ways, starting from an empty placeholder image:

1. By dragging and dropping onto the figure + symbol.
2. By clicking the figure + symbol and choosing Choose File to Import…
3. By clicking the figure + symbol and choosing amongst orphaned figures in the manuscript, in case you have previously added and then removed figures (orphaned figure images can be removed permanently by right-clicking the figure in the list of orphaned figures and choosing Delete).

To replace a figure, simply either:

1. Drag an image on top of the image you wish to replace.
2. Alternatively, click on the image and drag in the popover that opens, or again Choose File to Import…

Creating a multi-panel figure

You can create a multi-panel figure in Manuscripts simply by:

Figure 18: Creating a multi-panel figure

Figure file formats

Manuscripts always deals with image data in a non-destructive way.
For instance, figure panel images all internally still store the original full-sized image, even if panel members need to be scaled. Similarly, Manuscripts keeps vector-formatted graphics in their original form and only rasterises them to bitmap images on export, if required (see below).

Figure 19: How to create a table in Manuscripts

The table that is created looks like this:

Figure 20: New table

To add or remove rows or columns, right-click any of the columns or rows and choose the relevant option from the context menu.

Figure 21: Editing a table in Manuscripts

How to edit the table format in Manuscripts

You can change and edit the table format with the table styles inspector available on the right-hand side of the main window by clicking the Toggle Inspector button (located in the lower right corner of the application window).

Figure 22: Edit and change table styles in the Inspector (A: Inspector toggle icon; B: Table Styles in the Inspector)

How to create an equation

Manuscripts includes a powerful equation editor that allows you to insert beautiful equations using LaTeX.

Creating an empty equation

There are three different ways to create an empty equation. Each one of them requires you to first place the text insertion cursor next to the spot where you want to insert the equation. After that, you can either:

1. Click the equation symbol in the toolbar.
2. Use the menu item Insert > Equation in the main menu.
3. Use the smart gutter menu to the left of the paragraph that is in focus, before or after which you intend to add the equation. Click on the blue + symbol in it, and choose Insert Equation.

Figure 23: Creating an equation (A: equation icon)

The equation placeholder that is inserted into the manuscript looks like this:

Figure 24: Equation placeholder

How to edit an equation

To edit an equation, click on the equation so that the equation editor pops up. Now you can type or paste your equation using LaTeX, and the equation will be rendered in real time. If you are not yet familiar with the LaTeX syntax for equations, you can find out more here: https:///wiki/LaTeX/Mathematics#Symbols

Figure 25: Editing an equation in Manuscripts
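For instance, a standard math-mode expression of the following kind is the sort of input the equation editor accepts (the formula itself is only an illustrative example, not taken from the app's documentation):

```latex
% Fourier transform, written in ordinary LaTeX math mode
\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx
```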
Manuscript file versioning: backups & change tracking

In short, your Manuscripts documents are versioned for backup and rollback purposes right now. This will also form the basis for the change tracking features we intend to build after 1.0.

Manuscripts includes a citation tool that allows you to cite papers without directly requiring you to interact with an external reference manager. This is what the Manuscripts citation tool looks like:

Figure 26: Using the citation tool within Manuscripts

Importing bibliography data into Manuscripts

We support importing references from all the major reference file formats (EndNote XML, RIS, BibTeX and more). All popular reference managers can export to at least one of these formats. To insert references, either:

1. Choose File > Open in the main menu to open a bibliography file that you exported from your favorite reference manager.
2. Choose File > Import in the main menu to import bibliography data into your currently open manuscript.
3. Drag a bibliography file onto the Manuscripts dock icon.

Configuring a citation keyboard shortcut

You can configure the keyboard shortcut to use for the internal citation tool with the option available at Preferences > Editing > Citation Shortcut:

Figure 27: Configuring a citation tool shortcut in Editing Preferences

Enjoy writing your next best work with Manuscripts

We hope you really enjoy writing with Manuscripts. Visit our forums for more information and up-to-date answers to questions you may have. Please contact us via support@ if you would like to talk to us or hit issues with the app. We would love to hear what you think of Manuscripts and how we can make it even better. Enjoy!

The Manuscripts team
VLDB 2014
40th International Conference on Very Large Data Bases, Hangzhou, China

Proceedings of the VLDB Endowment, Volume 7, No. 11 – July 2014
Proceedings of the 40th International Conference on Very Large Data Bases, Hangzhou, China

Program Chairs and Editors-in-Chief: H. V. Jagadish, Aoying Zhou
Associate Editors – Research and Innovative Systems Tracks: Shivnath Babu, Lei Chen, Graham Cormode, Bin Cui, Wynne Hsu, Martin Kersten, Donald Kossman, Elke Rundensteiner, Kyuseok Shim, Wang-Chiew Tan, Letizia Tanca, Jeffrey Yu
Associate Editors – Experiments and Analysis Track: Gao Cong, Jens Dittrich
Associate Editors – Vision Track: Zachary Ives
Proceedings Chairs: Li Xiong, Cong Yu

PVLDB – Proceedings of the VLDB Endowment, Volume 7, No. 11, July 2014.
The 40th International Conference on Very Large Data Bases, Hangzhou, China.

Copyright 2014 VLDB Endowment. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. To view a copy of this license, visit /licenses/by-nc-nd/3.0/. Obtain permission prior to any use beyond those covered by the license. Contact the copyright holder by emailing *************.

Volume 7, Number 11, July 2014: VLDB 2014. Pages ii–ix and 931–1022. ISSN 2150-8097.
Additional copies only online at: , /corr, and

TABLE OF CONTENTS

Front Matter
Copyright Notice (ii)
Table of Contents (iii)
VLDB 2014 Organization and Review Board (iv)

Letters
First VLDB in Mainland China. Lidan Shou (ix)

Research Papers
Trekking Through Siberia: Managing Cold Data in a Memory-Optimized Database. Ahmed Eldawy, Justin Levandoski, Per-Åke Larson (931)
The Case for Personal Data-Driven Decision Making. Jennie Duggan (943)
ConfluxDB: Multi-Master Replication for Partitioned Snapshot Isolation Databases. Prima Chairunnanda, Khuzaima Daudjee, M. Tamer Ozsu (947)
γ-DB: Managing scientific hypotheses as uncertain data. Bernardo Goncalves, Fabio Porto (959)
Ibex — An Intelligent Storage Engine with Support for Advanced SQL Off-loading. Louis Woods, Zsolt Istvan, Gustavo Alonso (963)
NOMAD: Nonlocking, stOchastic Multi-machine algorithm for Asynchronous and Decentralized matrix completion. Hyokun Yun, Hsiang-Fu Yu, Cho-Jui Hsieh, S V N Vishwanathan, Inderjit Dhillon (975)
Repairing Vertex Labels under Neighborhood Constraints. Shaoxu Song, Hong Cheng, Jeffrey Xu Yu, Lei Chen (987)
Progressive Approach to Relational Entity Resolution. Yasser Altowim, Dmitri V. Kalashnikov, Sharad Mehrotra (999)
Concurrent Analytical Query Processing with GPUs. Kaibo Wang, Kai Zhang, Yuan Yuan, Siyuan Ma, Rubao Lee, Xiaoning Ding, Xiaodong Zhang (1011)

VLDB 2014 ORGANIZATION AND REVIEW BOARD

Honorary Chair: Yunhe Pan, Chinese Academy of Engineering
General Chairs: Chun Chen, Zhejiang University; Sharad Mehrotra, University of California, Irvine
Program Chairs and Editors-in-Chief of PVLDB 7:
H. V. Jagadish, University of Michigan; Aoying Zhou, East China Normal University, China

Associate Editors, Research and Innovative Systems Tracks: Shivnath Babu, Duke University; Lei Chen, Hong Kong University of Science and Technology; Graham Cormode, University of Warwick; Bin Cui, Peking University, China; Wynne Hsu, NUS; Martin Kersten, CWI; Donald Kossman, ETH; Elke Rundensteiner, WPI; Kyuseok Shim, Seoul National University; Wang-Chiew Tan, University of California, Santa Cruz; Letizia Tanca, Poli Milano; Jeffrey Yu, Chinese University of Hong Kong

Associate Editors, Experiments and Analysis Track: Gao Cong, Nanyang Technological University; Jens Dittrich, Saarland

Associate Editor, Visions Track: Zachary Ives, University of Pennsylvania

Associate Editors, Industrial and Applications Track: Umeshwar Dayal, HP; C. Mohan, IBM; Ge Yu, Northeastern University, China

Demonstration Chairs: Mong-Li Lee, NUS; Feifei Li, University of Utah; Sunil Prabhakar, Purdue

Tutorial Chairs: Xiaoyong Du, Renmin University; Murat Kantarcioglu, University of Texas, Dallas; Divesh Srivastava, AT&T Labs

Workshop Chairs: Anastasia Ailamaki, EPFL; Kaushik Chakrabarti, Microsoft

Panel Chairs: Hakan Hacigumus, NEC Labs; Jignesh Patel, University of Wisconsin; Xiaoyang Sean Wang, Fudan University

Research Track Review Board: Sibel Adali, Rensselaer Polytechnic Institute; Foto Afrati, NTU Athens; Yanif Ahmad, JHU; Jose Luis Ambite, ISI-USC; Walid Aref, Purdue University; Claudia Bauzer Medeiros, University of Campinas; Srikanta Bedathur, IIIT Delhi; Michael Benedikt, Oxford University; Sonia Bergamaschi, Universita Modena; Laure Berti-Equille, IRD, France; Leopoldo Bertossi, Carleton University, Ottawa; Subhash Bhalla, University of Aizu, Japan; Peter Boncz, CWI; Angela Bonifati, University of Lille 1; Rajesh Bordawekar, IBM Watson Research Center; Omar Boucelma, Aix-Marseille University; Nico Bruno, Microsoft Research; Andrea Cali, University of London, Birkbeck College; Malu Castellanos, HP Labs; Badrish Chandramouli, Microsoft Research; Adriane Chapman, Mitre; Gang Chen, Zhejiang University; Yi Chen, New Jersey Institute of Technology; James Cheng, CUHK; Reynold Cheng, University of Hong Kong; Brian Cooper, Google, USA; Philippe Cudré-Mauroux, University of Fribourg; Carlo Curino, MIT; Gautam Das, UT Arlington and QCRI; Sudipto Das, Microsoft Research; Anish Das Sarma; Atish Das Sarma, eBay Research Labs; Khuzaima Daudjee, University of Waterloo; Antonios Deligiannakis, Technical University of Crete; Daniel Deutch, Ben Gurion University; Yanlei Diao, University of Massachusetts Amherst; Xin (Luna) Dong, Google, USA; Sameh Elnikety, Microsoft Research; Mohamed Eltabakh, Worcester Polytechnic Institute;
Ihab F. Ilyas, QCRI; Hakan Ferhatosmanoglu, Bilkent University; Ada Wai-Chee Fu, Chinese University of Hong Kong; Minos Garofalakis, Technical University of Crete; Wolfgang Gatterbauer, Carnegie Mellon University; Tingjian Ge, University of Massachusetts, Lowell; Buğra Gedik, Bilkent University; Rainer Gemulla, Max-Planck-Institut Saarbrücken; Gabriel Ghinita, University of Massachusetts Boston; Parke Godfrey, York University; Lukasz Golab, University of Waterloo; Sergio Greco, University of Calabria; Le Gruenwald, University of Oklahoma; Giovanna Guerrini, Universita Genova; Krishna Gummadi, MPI-SWS; Rahul Gupta, Google Research; Rajeev Gupta, IBM Research; Shyam Gupta, IIT Delhi; Marios Hadjieleftheriou, AT&T Labs; Wook-Shin Han, KNU, Korea; Harumi Kuno, HP Labs; Bingsheng He, NTU Singapore; Sven Helmer, Free University of Bozen-Bolzano; Jan Hidders, TU Delft; Wei Hong, Cisco Systems Inc.; Katja Hose, Aalborg University; Zi Huang, University of Queensland; Jeong-Hyon Hwang, SUNY Albany; Seung-won Hwang, POSTECH, Korea; Stratos Idreos, CWI; Yoshiharu Ishikawa, Nagoya University; Zachary Ives, University of Pennsylvania; Ricardo Jimenez-Peris, Technical University of Madrid; Cheqing Jin, East China Normal University; Ruoming Jin, Kent State University; Alekh Jindal, Saarland University/MIT; Ryan Johnson, University of Toronto; Dmitri V. Kalashnikov, UC Irvine; Panos Kalnis, KAUST, Saudi Arabia; Ben Kao, Hong Kong University; Panagiotis Karras, Rutgers University; Yiping Ke, Institute of High Performance Computing; Bettina Kemme, McGill University; Daniel Kifer, PSU; Benny Kimelfeld, IBM; Hideaki Kimura, Microsoft Jim Gray Systems Lab; George Kollios, Boston University; Christian König, Microsoft Research; Tim Kraska, Brown University; Laks V. S. Lakshmanan, University of British Columbia; Mounia Lalmas, Yahoo Inc.; Mong-Li Lee, National University of Singapore; Wolfgang Lehner, Technische Universität Dresden; Justin Levandoski, Microsoft Research; Chengkai Li, The University of Texas at Arlington; Cuiping Li, Renmin University of China; Feifei Li, University of Utah; Guoliang Li, Tsinghua University; Jianzhong Li, Harbin Institute of Technology; Yunyao Li, IBM Almaden; Zhanhuai Li, Northwestern Polytechnical University; Dan Lin, Missouri S&T, USA; Xuemin Lin, University of New South Wales; Bin Liu, NEC Labs America; Ziyang Liu, NEC Labs America; Eric Lo, The Hong Kong Polytechnic University; Qiong Luo, HKUST; Shuai Ma, Beihang University; Ashwin Machanavajjhala, Duke University; Brad Malin, Duke University; Nikos Mamoulis, University of Hong Kong; Stefan Manegold, CWI; Murali Mani, University of Michigan; Ioana Manolescu, INRIA, France; Amélie Marian, Rutgers University; Volker Markl, TU Berlin; Marta Mattoso, Federal University of Rio de Janeiro; Frank McSherry, Microsoft; Alexandra Meliou, UMass Amherst; Marco Mesiti, University of Milano; Dan Miranker, The University of Texas at Austin; Mohamed Mokbel, University of Minnesota; Bongki Moon, Seoul National University; Yasuhiko Morimoto, Hiroshima University; Mirella Moro, Universidade Federal de Minas Gerais; Kyriakos Mouratidis, SMU, Singapore; Karin Murthy, IBM India; Arnab Nandi, Ohio State University; Wolfgang Nejdl, University of Hannover; Thomas Neumann, TU München; Boris Novikov, St Petersburg University; Dan Olteanu, Oxford University; Gultekin Ozsoyoglu, Case Western Reserve University; Tamer Ozsu, University of Waterloo; Esther Pacitti, University of Montpellier; Themis Palpanas, University of Trento; Ippokratis Pandis, IBM Almaden; Stelios Paparizos, Microsoft Research; Aditya Parameswaran, Stanford University; Srinivasan Parthasarathy, The Ohio State University;
WisconsinAndrew Pavlo, Brown UniversityPeter Pietzuch, Imperial College LondonNeoklis Polyzotis, University of California - Santa Cruz Cecilia M. Procopiuc, AT&T LabsLi Qian, University of MichiganJorge Quiané-Ruiz, QCRIElisa Quintarelli, Politecnico di MilanoMaya Ramanath, IIT DelhiLouiqa Raschid, University of MarylandVibhar Rastogi, YahooMatthias Renz, University of MunichKenneth Ross, Columbia UniversitySourav S Bhowmick, NTU, SingaporeDimitris Sacharidis, IMIS Athena, GreeceKenneth Salem, Univesity of WaterlooMaria Sapino, University of TorinoKai-Uwe Sattler, TU IlmenauMonica Scannapieco, ISTATBernhard Seeger, University of MarburgLidan Shou, Zhejiang UniversityAdam Silberstein, TrifactaLisa Singh, Georgetown UniversityRadu Sion, Stony Brook University Yufei Tao, Chinese University of Hong Kong Nesime Tatbul, ETH ZurichArash Termehchy, Oregon State University Evimaria Terzi, University of BostonMartin Theobald, Max Planck Institute, Germany Srikanta Tirthapura, Iowa State University Riccardo Torlone, Roma Tre UniversityAnthony Tung, National University of Singapore Kostas Tzoumas, Technical University of Berlin Sergei Vassilvitskii, Google ResearchMarcos Vaz Salles, University of Copenhagen (DIKU) Stratis Viglas, University of EdinburghHoang Tam Vo, National University of Singapore Daisy Zhe Wang, University of FloridaHaixun Wang, Microsoft Research AsiaKe Wang, Simon Fraser UniversityWei Wang, University of New South Wales Xiaoling Wang, East China Normal University Ingmar Weber, YahooRaymond Chi Wing Wong, HKUSTSai Wu, Zhejiang UniversityYuqing Wu, Indiana UniversityXiaokui Xiao, NTUDong Xin, GoogleJianliang Xu, Hong Kong Baptist UniversityJun (Jim) Xu, Georgia Institute of Technology Xifeng Yan, University of Santa BarbaraXiaoyan Yang, Advanced Digital Science CenterKe Yi, HKUSTGe Yu, Northeastern University, ChinaHwanjo Yu, POSTECH, KoreaMeihui Zhang, National University of Singapore Wenjie Zhang, The University of New South Wales Ying Zhang, The University of New South Wales Zhenjie Zhang, Advanced Digital Science Center Wenzhao Zhou, Georgetown UniversityXiaofang Zhou, University of QueenslandPhD Workshop ChairsErich Neuhold, University of Vienna Yunyao Li, IBM Proceedings ChairsLi Xiong, Emory University Cong Yu, Google ResearchSponsorship ChairsMike Carey, University of California, Irvine Lizhu Zhou, Tsinghua UniversityLocal Organization ChairLidan Shou, Zhejiang UniversityWeb Management ChairSai Wu, Zhejiang UniversityConference and Registration Chairs Ke Chen, Zhejiang UniversityCuiping Li, Renmin UniversityPublicity ChairsVasilis Vassalos, AUEB, GreeceDunlu Peng, USST, China Treasury ChairLi (Eric) Qian, University of MichiganVLDB Endowment LiaisonKyu-Young Whang, KAISTPVLDB Managing EditorDivesh Srivastava, AT&T LabsPVLDB Information DirectorGerald Weber, University of AucklandPVLDB Advisory CommitteePhilip Bernstein, Michael Böhlen, Peter Buneman, Susan Davidson, Z. Meral Ozsoyoglu, S. Sudarshan, Gerhard WeikumLogo DesignGuanmin GuoFIRST VLDB IN MAINLAND CHINAAs the Local Organization Chair of this year's VLDB, I would like to take this opportunity to introduce toyou the wonderful city of Hangzhou, which is to host the upcoming VLDB 2014. Located at the southern tip of the Yangtze River Delta, Hangzhou is the capital of the highly developed Zhejiang province ineastern China. 
With a culture-rich history dating back to the second century B.C., Hangzhou boasts an influential position in ancient poetry, textile design, calligraphy, and numerous other forms of traditional Chinese art. Besides all these man-made wonders, the city is most famous for the natural scenery around the mythical West Lake, a major tourist area surrounded by dozens of attractions. While boating on the lake is a must for most visitors, you may be more excited by a detour into the serenity of the ancient Hupao Temple, a light hike at dawn towards the Baochu Pagoda, or cycling at dusk on the Bai Causeway.

While maintaining its historic, romantic name of "earthly paradise", Hangzhou is striving today to earn the title of "IT paradise" of China. The city is home to several major players in the Chinese ICT industry, and it has attracted tens of thousands of young technology entrepreneurs venturing out of its High-Tech District and university incubators.

This year's VLDB is not only a fiesta for the world's database academia and industry, but also a landmark event for the Chinese database community. On behalf of the local organizing team at Zhejiang University, I would like to deliver my heartfelt thanks to all those who have helped and sponsored this event. Please join us in the picturesque city of Hangzhou!

Lidan Shou, Zhejiang University, Hangzhou, Zhejiang, China
Local Organization Chair, VLDB 2014
Geroscience manuscript format

The format of a manuscript for submission to a scientific journal in the field of geroscience (the study of aging and its related processes) typically follows the guidelines provided by the specific journal to which you are submitting. Different journals may have slightly different formatting requirements, and it is crucial to adhere to the specific guidelines provided by the target journal. However, there are general guidelines that are common across many scientific journals. Here is an example of a basic manuscript format:

1. Title Page:
• Title: Concise and informative.
• Authors: Full names, affiliations, and email addresses of all authors.
• Corresponding Author: Clearly indicate who will handle correspondence at all stages of review and publication.

2. Abstract:
• A brief summary of the study, including objectives, methods, results, and conclusions.

3. Keywords:
• Provide a list of keywords that capture the main topics of the manuscript.

4. Introduction:
• Clearly state the research question, objectives, and the background of the study.

5. Methods:
• Describe the study design, participants, procedures, and statistical methods.

6. Results:
• Present the findings with the help of tables, figures, and text.

7. Discussion:
• Interpret the results, discuss their implications, and relate them to existing literature.

8. Conclusion:
• Summarize the main findings and suggest potential implications and areas for future research.

9. Acknowledgments:
• Acknowledge individuals or organizations that contributed to the research.

10. References:
• Cite all sources used in the manuscript following a specific citation style (e.g., APA, MLA, Chicago).

11. Figures and Tables:
• Include any figures and tables with appropriate captions. Ensure they are cited in the text.

12. Supplementary Material:
• If applicable, include any supplementary material, data, or additional information.

Always check the specific guidelines of the target journal for any additional requirements, such as preferred citation style, formatting of references, figure resolutions, and specific manuscript length restrictions. When in doubt, consult the author guidelines provided by the journal or contact the editorial office for clarification. Different journals may have different requirements, and adherence to these guidelines is crucial for successful manuscript submission and peer review.
The template is designed to help the submission of your article. For more detailed information about Genome Biology and its submission guidelines, please go to the journal's official website.

Submission notes:
• Your manuscript must include a 'Declarations' section.
• Use double line spacing, include line and page numbering, and do not use page breaks in your manuscript.
• Use SI units. Please ensure that all special characters used are embedded in the text, otherwise they will be lost during conversion to PDF.
• The Abstract should not exceed 250 words. Please minimize the use of abbreviations and do not cite references in the abstract. The abstract must include the following separate sections: Background, Results, and Conclusions.
• The following word processor file formats are acceptable for the main manuscript document: Microsoft Word (DOC, DOCX), Rich text format (RTF), TeX/LaTeX (use BioMed Central's TeX template). If a TeX format is submitted, please include a reference PDF of the manuscript.
• Note that editable files are required for processing in production.

Title Page
The title page should:
• present a title that includes, if appropriate, the study design, e.g.: "A versus B in the treatment of C: a randomized controlled trial", "X is a risk factor for Y: a case control study", "What is the impact of factor X on subject Y: A systematic review"; or, for non-clinical or non-research studies, a description of what the article reports;
• list the full names and institutional addresses for all authors (if a collaboration group should be listed as an author, please list the group name as an author; if you would like the names of the individual members of the group to be searchable through their individual PubMed records, please include this information in the "Acknowledgements" section in accordance with the instructions below);
• indicate the corresponding author.

Article Title
Firstname-1 Lastname-1, Firstname-2 Lastname-2, Firstname-3 Lastname-3 & Firstname-4 Lastname-4*

Abstract
Background: Describe the context and purpose of the study.
Results: Write the main findings.
Conclusions: Present a brief summary and potential implications.

Keywords
Three to ten keywords representing the main content of the article.

Background
The Background section should explain the background to the study, its aims, a summary of the existing literature, and why this study was necessary or its contribution to the field.

Results
This section should include the findings of the study including, if appropriate, results of statistical analysis, which must be included either in the text or as tables (Table 1) and figures (Fig. 1). For details on how to prepare figures, visit here. For additional information on how to prepare tables, visit here.

Fig. 1 Comparison of single-cell and single-nucleus libraries. a Cell type composition for kidneys from Balb/c female mice. Average percentages for scRNA-seq libraries are shown in blue and for snRNA-seq libraries in gray. BSEQ-sc estimates are shown for bulk RNA-seq of intact and dissociated kidneys. Error bars are standard error of mean. b Abundance of renal epithelial cell types in Clark et al. study [34] in comparison to our data from Balb/c female mice.

Table 1 Table title (max 15 words). Legend (max 300 words) should be included underneath the table.

Results that would otherwise be indicated as "data not shown" should be included as additional files. BioMed Central requires that supporting data are included as additional files or deposited in a recognized repository. The maximum file size for additional files is recommended as 20 MB each. Each additional file should be cited in sequence within the main body of text. For more information on additional files, visit here.

Discussion
For research articles, this section should discuss the implications of the findings in the context of existing research and highlight limitations of the study. For study protocols and methodology manuscripts, this section should include a discussion of any practical or operational issues involved in performing the study and any issues not covered in other sections.

Conclusions
This should state clearly the main conclusions and provide an explanation of the importance and relevance of the study to the field.

Methods
The methods section should include:
• the aim, design and setting of the study,
• the characteristics of participants or description of materials,
• a clear description of all processes, interventions, and comparisons (generic drug names should generally be used; when proprietary brands are used in research, include the brand names in parentheses),
• the type of statistical analysis used, including a power calculation if appropriate.

References
Examples of the Vancouver reference style are shown here. All web links and URLs, including links to the authors' own websites, should be given a reference number and be included in the reference list rather than within the text of the manuscript.

Declarations
All manuscripts must contain the following sections under the heading 'Declarations': Ethics approval and consent to participate, Consent for publication, Availability of data and materials, Competing interests, Funding, Authors' contributions, Acknowledgements, and Authors' information (optional). If any of the sections are not relevant to your manuscript, please include the heading and write 'Not applicable' for that section.

Ethics approval and consent to participate
For manuscripts reporting studies involving human participants, human data or human tissue, a statement must be included that written consent was given by all participants, and also a statement that the experimental methods comply with the Helsinki Declaration. Authors should:
• include a statement on ethics approval and consent (even where the need for approval was waived);
• include the name of the ethics committee that approved the study and the committee's reference number if appropriate.
Studies involving animals must include a statement on ethics approval, and for experimental studies involving client-owned animals, authors must also include a statement on informed consent from the client or owner. For more information, visit here.

Consent for publication
If your manuscript contains any individual person's data in any form (including any individual details, images or videos), consent for publication must be obtained from that person, or in the case of children, their parent or legal guardian. All presentations of case reports must have consent for publication. You can use your institutional consent form or our consent form if you prefer. You should not send the form to us on submission, but we may request to see a copy at any stage (including after publication). For more information on consent for publication, visit here.

Availability of data and materials
All manuscripts must include an 'Availability of data and materials' statement. Data availability statements should include information on where data supporting the results reported in the article can be found including, where applicable, hyperlinks to publicly archived datasets analysed or generated during the study. We recognise it is not always possible to share research data publicly, for instance when individual privacy could be compromised, and in such instances data availability should still be stated in the manuscript along with any conditions for access. If your manuscript does not contain any data, please state 'Not applicable' in this section. For more information, visit here.

Competing interests
All financial and non-financial competing interests must be declared in this section. See our editorial policies for a full explanation of competing interests. If you are unsure whether you or any of your co-authors have a competing interest, please contact the editorial office. Please use the authors' initials to refer to each author's competing interests in this section. If you do not have any competing interests, please state "The authors declare that they have no competing interests" in this section.

Funding
All sources of funding for the research reported should be declared. The role of the funding body in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript should be declared.

Authors' Information
You may choose to use this section to include any relevant information about the author(s) that may aid the reader's interpretation of the article and understanding of the standpoint of the author(s). This may include details about the authors' qualifications, current positions they hold at institutions or societies, or any other relevant background information. Please refer to authors using their initials. Note this section should not be used to describe any competing interests.

Affiliations
School of Engineering and Material Science, Queen Mary University of London, London, UK
FirstnameInitials-1 Lastname-1, FirstnameInitials-2 Lastname-2 & FirstnameInitials-3 Lastname-3
Process & Energy Department, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
FirstnameInitials-4 Lastname-4

Contributions
The individual contributions of authors to the manuscript should be specified in this section. Please use initials to refer to each author's contribution in this section, for example: "FC analyzed and interpreted the patient data regarding the hematological disease and the transplant. RH performed the histological examination of the kidney and was a major contributor in writing the manuscript. All authors read and approved the final manuscript."

Corresponding author
Correspondence to: Firstname-4 Lastname-4

Acknowledgements
Please acknowledge anyone who contributed towards the article who does not meet the criteria for authorship, including anyone who provided professional writing services or materials. Authors should obtain permission to acknowledge from all those mentioned in the Acknowledgements section. For more information, visit here.

You may also wish to include a section here about our additional file requirements (supplementary information). Supplemental figures should be combined into one file, with legends included. Supplementary tables should be submitted as individual Excel files (not tabs on the same file), or in a .doc format. There should be a section at the end of the Declarations listing the additional files with a short description of each, and they should be referred to throughout the text in the format (Additional file 1: Fig S1).
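Several of the constraints above (a structured abstract capped at 250 words, a fixed set of editable manuscript formats, and the recommended 20 MB limit per additional file) can be checked mechanically before submission. The sketch below is a minimal pre-flight check of our own devising, not a BioMed Central tool; the function names and constants merely transcribe the limits quoted above.

```python
import os

# Limits quoted in the template above (assumptions, not an official API).
MAX_ABSTRACT_WORDS = 250
ALLOWED_MAIN_FORMATS = {".doc", ".docx", ".rtf", ".tex"}
MAX_ADDITIONAL_FILE_MB = 20
REQUIRED_ABSTRACT_SECTIONS = ("Background", "Results", "Conclusions")

def check_abstract(abstract: dict) -> list:
    """Return a list of problems with a structured abstract."""
    problems = []
    for section in REQUIRED_ABSTRACT_SECTIONS:
        if not abstract.get(section, "").strip():
            problems.append(f"missing abstract section: {section}")
    total_words = sum(len(text.split()) for text in abstract.values())
    if total_words > MAX_ABSTRACT_WORDS:
        problems.append(f"abstract has {total_words} words (limit {MAX_ABSTRACT_WORDS})")
    return problems

def check_files(main_file: str, additional_files: list) -> list:
    """Check the main manuscript format and additional file sizes."""
    problems = []
    ext = os.path.splitext(main_file)[1].lower()
    if ext not in ALLOWED_MAIN_FORMATS:
        problems.append(f"main manuscript format {ext!r} not accepted")
    for path in additional_files:
        size_mb = os.path.getsize(path) / (1024 * 1024)
        if size_mb > MAX_ADDITIONAL_FILE_MB:
            problems.append(f"{path}: {size_mb:.1f} MB exceeds {MAX_ADDITIONAL_FILE_MB} MB")
    return problems

if __name__ == "__main__":
    abstract = {"Background": "...", "Results": "...", "Conclusions": "..."}
    print(check_abstract(abstract) or "abstract OK")
```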
The VLDB Journal (2003) / Digital Object Identifier (DOI) 10.1007/s00778-003-0095-z

Aurora: a new model and architecture for data stream management

Daniel J. Abadi¹, Don Carney², Uğur Çetintemel², Mitch Cherniack¹, Christian Convey², Sangdon Lee², Michael Stonebraker³, Nesime Tatbul², Stan Zdonik²

¹ Department of Computer Science, Brandeis University, Waltham, MA 02254; e-mail: {dna,mfc}@
² Department of Computer Science, Brown University, Providence, RI 02912; e-mail: {dpc,ugur,cjc,sdlee,tatbul,sbz}@
³ Department of EECS and Laboratory of Computer Science, M.I.T., Cambridge, MA 02139; e-mail: stonebraker@

Edited by ♣. Received: ♣ / Accepted: ♣. Published online: ♣ 2003 – © Springer-Verlag 2003

Abstract. This paper describes the basic processing model and architecture of Aurora, a new system to manage data streams for monitoring applications. Monitoring applications differ substantially from conventional business data processing. The fact that a software system must process and react to continual inputs from many sources (e.g., sensors) rather than from human operators requires one to rethink the fundamental architecture of a DBMS for this application area. In this paper, we present Aurora, a new DBMS currently under construction at Brandeis University, Brown University, and M.I.T. We first provide an overview of the basic Aurora model and architecture and then describe in detail a stream-oriented set of operators.

Keywords: Data stream management – Continuous queries – Database triggers – Real-time systems – Quality-of-service

1 Introduction

Traditional DBMSs have been oriented toward business data processing, and consequently are designed to address the needs of these applications. First, they have assumed that the DBMS is a passive repository storing a large collection of data elements and that humans initiate queries and transactions on this repository. We call this a Human-Active, DBMS-Passive (HADP) model. Second, they have assumed that the current state of the data is the only thing that is important.
Hence, current values of data elements are easy to obtain, while previous values can only be found torturously by decoding the DBMS log. The third assumption is that triggers and alerters are second-class citizens. These constructs have been added as an afterthought to current systems, and none has an implementation that scales to a large number of triggers. Fourth, DBMSs assume that data elements are synchronized and that queries have exact answers. In many stream-oriented applications, data arrive asynchronously and answers must be computed with incomplete data. Lastly, DBMSs assume that applications require no real-time services.

There is a substantial class of applications where all five assumptions are problematic. Monitoring applications are applications that monitor continuous streams of data. This class of applications includes military applications that monitor readings from sensors worn by soldiers (e.g., blood pressure, heart rate, position), financial analysis applications that monitor streams of stock data reported from various stock exchanges, and tracking applications that monitor the locations of large numbers of objects for which they are responsible (e.g., audio-visual departments that must monitor the location of borrowed equipment). Because of the high volume of monitored data and the query requirements for these applications, monitoring applications would benefit from DBMS support. Existing DBMS systems, however, are ill suited for such applications since they target business applications.

First, monitoring applications get their data from external sources (e.g., sensors) rather than from humans issuing transactions. The role of the DBMS in this context is to alert humans when abnormal activity is detected. This is a DBMS-Active, Human-Passive (DAHP) model.

Second, monitoring applications require data management that extends over some history of values reported in a stream and not just over the most recently reported values. Consider a monitoring application that tracks the location of items of interest, such as overhead transparency projectors and laptop computers, using electronic property stickers attached to the objects. Ceiling-mounted sensors inside a building and the GPS system in the open air generate large volumes of location data. If a reserved overhead projector is not in its proper location, then one might want to know the geographic position of the missing projector. In this case, the last value of the monitored object is required. However, an administrator might also want to know the duty cycle of the projector, thereby requiring access to the entire historical time series.

Third, most monitoring applications are trigger-oriented. If one is monitoring a chemical plant, then one wants to alert an operator if a sensor value gets too high or if another sensor value has recorded a value out of range more than twice in the last 24 h. Every application could potentially monitor multiple streams of data, requesting alerts if complicated conditions are met. Thus, the scale of trigger processing required in this environment far exceeds that found in traditional DBMS applications.

Fourth, stream data are often lost, stale, or intentionally omitted for processing reasons. An object being monitored may move out of range of a sensor system, thereby resulting in lost data. The most recent report on the location of the object becomes more and more inaccurate over time. Moreover, in managing data streams with high input rates, it might be necessary to shed load by dropping less important input data.
All of this, by necessity, leads to approximate answers.

Lastly, many monitoring applications have real-time requirements. Applications that monitor mobile sensors (e.g., military applications monitoring soldier locations) often have a low tolerance for stale data, making these applications effectively real time. The added stress on a DBMS that must serve real-time applications makes it imperative that the DBMS employ intelligent resource management (e.g., scheduling) and graceful degradation strategies (e.g., load shedding) during periods of high load. We expect that applications will supply Quality of Service (QoS) specifications that will be used by the running system to make these dynamic resource allocation decisions.

Monitoring applications are very difficult to implement in traditional DBMSs. First, the basic computation model is wrong: DBMSs have a HADP model while monitoring applications often require a DAHP model. In addition, to store time-series information one has only two choices. First, he can encode the time series as current data in normal tables. In this case, assembling the historical time series is very expensive because the required data is spread over many tuples, thereby dramatically slowing performance. Alternately, he can encode time series information in binary large objects to achieve physical locality, at the expense of making queries to individual values in the time series very difficult. One system that tries to do something more intelligent with time series data is the Informix Universal Server, which implemented a time-series data type and associated methods that speed retrieval of values in a time series [1]; however, this system does not address the concerns raised above.

If a monitoring application had a very large number of triggers or alerters, then current DBMSs would fail because they do not scale past a few triggers per table. The only alternative is to encode triggers in some middleware application. Using this implementation, the system cannot reason about the triggers (e.g., optimization), because they are outside the DBMS. Moreover, performance is typically poor because middleware must poll for data values that triggers and alerters depend on.

Lastly, no DBMS that we are aware of has built-in facilities for approximate query answering. The same comment applies to real-time capabilities. Again, the user must build custom code into his application.

For these reasons, monitoring applications are difficult to implement using traditional DBMS technology. To do better, all the basic mechanisms in current DBMSs must be rethought. In this paper, we describe a prototype system, Aurora, which is designed to better support monitoring applications. We use Aurora to illustrate design issues that would arise in any system of this kind.

Monitoring applications are applications for which streams of information, triggers, imprecise data, and real-time requirements are prevalent. We expect that there will be a large class of such applications. For example, we expect the class of monitoring applications for physical facilities (e.g., monitoring unusual events at nuclear power plants) to grow in response to growing needs for security. In addition, as GPS-style devices are attached to an ever broader class of objects, monitoring applications will expand in scope. Currently such monitoring is expensive and restricted to costly items like automobiles (e.g., Lojack technology [2]). In the future, it will be available for most objects whose position is of interest.

In Sect. 2, we begin by describing the basic Aurora architecture and fundamental building blocks. In Sect. 3, we show why traditional query optimization fails in our environment and present our alternate strategies for optimizing Aurora applications. Section 4 describes the run-time architecture and behavior of Aurora, concentrating on storage organization, scheduling, introspection, and load shedding. In Sect. 5, we describe Aurora's data stream operators in detail. In Sect. 6, we discuss the myriad of related work that has preceded our effort. We describe the status of our prototype implementation in Sect. 7 and conclude in Sect. 8.

2 Aurora system model

Aurora data are assumed to come from a variety of data sources such as computer programs that generate values at regular or irregular intervals or hardware sensors. We will use the term data source for either case. In addition, a data stream is the term we will use for the collection of data values presented by a data source. Each data source is assumed to have a unique source identifier, and Aurora timestamps every incoming tuple to monitor the quality of service being provided.

Fig. 1. Aurora system model

The basic job of Aurora is to process incoming streams in the way defined by an application administrator. Aurora is fundamentally a data-flow system and uses the popular boxes and arrows paradigm found in most process flow and workflow systems. Hence, tuples flow through a loop-free, directed graph of processing operations (i.e., boxes). Ultimately, output streams are presented to applications, which must be programmed to deal with the asynchronous tuples in an output stream. Aurora can also maintain historical storage, primarily in order to support ad hoc queries. Figure 1 illustrates the high-level system model.

Aurora's query algebra (SQuAl¹) contains built-in support for seven primitive operations for expressing its stream processing requirements. Many of these have analogs in the relational query operators. For example, we support a filter operator that, like the relational operator select, applies any number of predicates to each incoming tuple, routing the tuples according to which predicates they satisfy. Another operator, Aggregate, computes stream aggregates in a way that addresses the fundamental push-based nature of streams, applying a function across a window of values in a stream (e.g., a moving average). In environments where data can be stale or time imprecise, windowed operations are a necessity. There is no explicit split box; instead, the application administrator can connect the output of one box to the input of several others. This implements an implicit split operation.

¹ SQuAl is short for [S]tream [Qu]ery [Al]gebra.
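To make the boxes-and-arrows model concrete, here is a toy sketch (our own, not Aurora's code) of the two operators just described: a filter box that routes each tuple to the output of the first predicate it satisfies, and a windowed aggregate that applies a function over a sliding window of values.

```python
from collections import deque

class FilterBox:
    """Route each incoming tuple to the output of the first predicate it satisfies."""
    def __init__(self, predicates):
        self.predicates = predicates               # one output stream per predicate
        self.outputs = [[] for _ in predicates]

    def push(self, tuple_):
        for i, pred in enumerate(self.predicates):
            if pred(tuple_):
                self.outputs[i].append(tuple_)
                return                             # route to first matching output

class WindowAggregateBox:
    """Apply an aggregate function over a sliding window of the last n values."""
    def __init__(self, n, func):
        self.window = deque(maxlen=n)
        self.func = func
        self.output = []

    def push(self, value):
        self.window.append(value)
        if len(self.window) == self.window.maxlen:
            self.output.append(self.func(self.window))

# Example: split sensor readings by a threshold, and keep a moving average of 3.
f = FilterBox([lambda t: t["temp"] > 100, lambda t: True])
avg = WindowAggregateBox(3, lambda w: sum(w) / len(w))
for reading in ({"temp": 99}, {"temp": 120}, {"temp": 101}):
    f.push(reading)
for v in (1, 2, 3, 4):
    avg.push(v)
print(len(f.outputs[0]), avg.output)   # prints: 2 [2.0, 3.0]
```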
On the other hand, there is an explicit Aurora Union operation, whereby two streams can be put together. If, additionally, one tuple must be delayed for the arrival of a second one, then a Resample box can be inserted in the Aurora network to accomplish this effect.

Arcs in an Aurora diagram actually represent a collection of streams with common schema. The actual number of streams on an arc is unspecified, making it easy to have streams appear and disappear without modification to the Aurora network.

2.1 Query model

Aurora supports continuous queries (real-time processing), views, and ad hoc queries, all using substantially the same mechanisms. All three modes of operation use the same conceptual building blocks. Each mode processes flows based on QoS specifications – each output in Aurora is associated with two-dimensional QoS graphs that specify the utility of the output in terms of several performance-related and quality-related attributes (see Sect. 4.1). The diagram in Fig. 2 illustrates the processing modes supported by Aurora.

Fig. 2. Aurora query model

The topmost path represents a continuous query. In isolation, data elements flow into boxes, are processed, and flow further downstream. In this scenario, there is no need to store any data elements once they are processed. Once an input has worked its way through all reachable paths, that data item is drained from the network. The QoS specification at the end of the path controls how resources are allocated to the processing elements along the path. One can also view an Aurora network (along with some of its applications) as a large collection of triggers. Each path from a sensor input to an output can be viewed as computing the condition part of a complex trigger. An output tuple is delivered to an application, which can take the appropriate action.

The dark circles on the input arcs to boxes b1 and b2 represent connection points. A connection point is an arc that supports dynamic modification to the network. New boxes can be added to or deleted from a connection point. When a new application connects to the network, it will often require access to the recent past. As such, a connection point has the potential for persistent storage (see Sect. 4.2). Persistent storage retains data items beyond their processing by a particular box. In other words, as items flow past a connection point, they are cached in a persistent store for some period of time.
They are not drained from the network by applications. Instead, a persistence specification indicates exactly how long the items are kept, so that a future ad hoc query can get historical results. In the figure, the leftmost connection point is specified to be available for 2 h. This indicates that the beginning of time for newly connected applications will be 2 h in the past.

Connection points can be generalized to allow an elegant way of including static data sets in Aurora. Hence we allow a connection point to have no upstream node, i.e., a dangling connection point. Without an upstream node the connection point cannot correspond to an Aurora stream. Instead, the connection point is decorated with the identity of a stored data set in a traditional DBMS or other storage system. In this case, the connection point can be materialized and the stored tuples passed as a stream to the downstream node. In this case, such tuples will be pushed through an Aurora network. Alternately, query execution on the downstream node can pull tuples by running a query to the store. If the downstream node is a filter or a join, pull processing has obvious advantages. Moreover, if the node is a join between a stream and a stored data set, then an obvious query execution strategy is to perform iterative substitution whenever a tuple from the stream arrives and perform a lookup to the stored data. In this case, a window does not need to be specified as the entire join can be calculated.

The middle path in Fig. 2 represents a view. In this case, a path is defined with no connected application. It is allowed to have a QoS specification as an indication of the importance of the view. Applications can connect to the end of this path whenever there is a need. Before this happens, the system can propagate some, all, or none of the values stored at the connection point in order to reduce latency for applications that connect later. Moreover, it can store these partial results at any point along a view path. This is analogous to a materialized or partially materialized view. View materialization is under the control of the scheduler.

The bottom path represents an ad hoc query. An ad hoc query can be attached to a connection point at any time. The semantics of an ad hoc query is that the system will process data items and deliver answers from the earliest time T (persistence specification) stored in the connection point until the query branch is explicitly disconnected. Thus, the semantics for an Aurora ad hoc query is the same as a continuous query that starts executing at t_now − T and continues until explicit termination.

2.2 Graphical user interface

The Aurora user interface cannot be covered in detail because of space limitations. Here, we mention only a few salient features. To facilitate designing large networks, Aurora will support a hierarchical collection of groups of boxes. A designer can begin near the top of the hierarchy where only a few superboxes are visible on the screen. A zoom capability is provided to allow him to move into specific portions of the network, by replacing a group with its constituent boxes and groups. In this way, a browsing capability is provided for the Aurora diagram. Boxes and groups have a tag, an argument list, a description of the functionality, and, ultimately, a manual page.
Users can teleport to specific places in an Aurora network by querying these attributes. Additionally, a user can place bookmarks in a network to allow him to return to places of interest. These capabilities give an Aurora user a mechanism to query the Aurora diagram. The user interface also allows monitors for arcs in the network to facilitate debugging, as well as facilities for "single stepping" through a sequence of Aurora boxes. We plan a graphical performance monitor as well as more sophisticated query capabilities.

3 Aurora optimization

In traditional relational query optimization, one of the primary objectives is to minimize the number of iterations over large data sets. Stream-oriented operators that constitute the Aurora network, on the other hand, are designed to operate in a data-flow mode in which data elements are processed as they appear on the input. Although the amount of computation required by an operator to process a new element is usually quite small, we expect to have a large number of boxes. Furthermore, high data rates add another dimension to the problem. Lastly, we expect many changes to be made to an Aurora network over time, and it seems unreasonable to take the network offline to perform a compile-time optimization. We now present our strategies to optimize an Aurora network.

3.1 Dynamic continuous query optimization

We begin execution of an unoptimized Aurora network, i.e., the one that the user constructed. During execution we gather run-time statistics such as the average cost of box execution and box selectivity. Our goal is to perform run-time optimization of a network, without having to quiesce it. Hence combining all the boxes into a massive query and then applying conventional query optimization is not a workable approach. Besides being NP-complete [26], it would require quiescing the whole network. Instead, the optimizer will select a portion of the network for optimization. Then it will find all connection points that surround the subnetwork to be optimized. It will hold all input messages at upstream connection points and drain the subnetwork of messages through all downstream connection points. The optimizer will then apply the following local tactics to the identified subnetwork.

• Inserting projections. It is unlikely that the application administrator will have inserted map operators (see Sect. 5) to project out all unneeded attributes. Examination of an Aurora network allows us to insert or move such map operations to the earliest possible points in the network, thereby shrinking the size of the tuples that must be subsequently processed. Note that this kind of optimization requires that the system be provided with operator signatures that describe the attributes that are used and produced by the operators.

• Combining boxes. As a next step, Aurora diagrams will be processed to combine boxes where possible. A pairwise examination of the operators suggests that, in general, map and filter can be combined with almost all of the operators, whereas windowed or binary operators cannot. It is desirable to combine two boxes into a single box when this leads to some cost reduction. As an example, a map operator that only projects out attributes can be combined easily with any adjacent operator, thereby saving the box-execution overhead for a very cheap operator. In addition, two filtering operations can be combined into a single, more complex filter that can be more efficiently executed than the two boxes it replaces. Not only is the overhead of a second box activation avoided, but also standard relational optimization on one-table predicates can be applied in the larger box. In general, combining boxes at least saves the box-execution overhead and reduces the total number of boxes, leading to a simpler diagram.

• Reordering boxes. Reordering the operations in a conventional relational DBMS to an equivalent but more efficient form is a common technique in query optimization. For example, filter operations can sometimes be pushed down the query tree through joins. In Aurora, we can apply the same technique when two operations commute.

To decide when to interchange two commutative operators, we make use of the following performance model. Each Aurora box, b, has a cost, c(b), defined as the expected execution time for b to process one input tuple. Additionally, each box has a selectivity, s(b), which is the expected number of output tuples per input tuple. Consider two boxes, b_i and b_j, with b_j following b_i. In this case, for each input tuple for b_i we can compute the amount of processing as c(b_i) + c(b_j) × s(b_i). Reversing the operators gives a like calculation. Hence we can compute the condition used to decide whether the boxes should be switched as:

(1 − s(b_j)) / c(b_j) < (1 − s(b_i)) / c(b_i)

It is straightforward to generalize the above calculation to deal with cases that involve fan-in or fan-out situations. Moreover, it is easy to see that we can obtain an optimal ordering by sorting all the boxes according to their corresponding ratios in decreasing order. We use this result in a heuristic algorithm that iteratively reorders boxes (to the extent allowed by their commutativity properties) until no more reorderings are possible.

When the optimizer has found all productive transformations using the above tactics, it constructs a new subnetwork, binds it into the composite Aurora network that is running, and then instructs the scheduler to stop holding messages at the input connection points. Of course, outputs affected by the subnetwork will see a blip in response time; however, the remainder of the network can proceed unimpeded.

An Aurora network is broken naturally into a collection of k subnetworks by the connection points that are inserted by the application administrator. Each of these subnetworks can be optimized individually, because it is a violation of Aurora semantics to optimize across a connection point. The Aurora optimizer is expected to cycle periodically through all k subnetworks and run as a background task.
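The sorting claim in the reordering tactic above is easy to see in code. The following is our own toy illustration (not Aurora's implementation), assuming a linear chain of boxes that all pairwise commute; each box is modeled as a (cost, selectivity) pair.

```python
def processing_cost(boxes):
    """Expected work per input tuple for a chain of boxes:
    c(b1) + s(b1)*c(b2) + s(b1)*s(b2)*c(b3) + ..."""
    cost, carried = 0.0, 1.0
    for c, s in boxes:
        cost += carried * c
        carried *= s
    return cost

def reorder(boxes):
    """Sort boxes by the ratio (1 - s(b)) / c(b), in decreasing order."""
    return sorted(boxes, key=lambda b: (1 - b[1]) / b[0], reverse=True)

chain = [(2.0, 0.9), (1.0, 0.1), (4.0, 0.5)]   # (c(b), s(b)) pairs
print(processing_cost(chain))            # original order: ~3.26
print(processing_cost(reorder(chain)))   # sorted order: ~1.5, never worse
```

Intuitively, a box with high selectivity and high cost "filters little and costs much", so it should run late; the ratio ordering makes that trade-off precise.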
3.2 Ad hoc query optimization

One last issue that must be dealt with is ad hoc query optimization. Recall that the semantics of an ad hoc query is that it must run on all the historical information saved at the connection points to which it is connected. Subsequently it becomes a normal portion of an Aurora network until it is discarded. Aurora processes ad hoc queries in two steps by constructing two separate subnetworks. Each is attached to a connection point, so the optimizer can be run before the scheduler lets messages flow through the newly added subnetworks.

Aurora semantics require the historical subnetwork to be run first. Since historical information is organized as a B-tree, the Aurora optimizer begins at each connection point and examines the successor box(es). If the box is a filter, then Aurora examines the condition to see if it is compatible with the storage key associated with the connection point. If so, it switches the implementation of the filter box to perform an indexed lookup in the B-tree. Similarly, if the successor box is a join, then the Aurora optimizer costs performing a merge-sort or indexed lookup, chooses the cheapest one, and changes the join implementation appropriately. Other boxes cannot effectively use the indexed structure, so only these two need be considered. Moreover, once the initial box performs its work on the historical tuples, the index structure is lost, and all subsequent boxes will work in the normal way. Hence, the optimizer converts the historical subnetwork into an optimized one, which is then executed. When it is finished, the subnetwork used for continuing operation can be run to produce subsequent output. Since this is merely one of the subnetworks, it can be optimized in the normal way suggested above.

In summary, the initial boxes in an ad hoc query can pull information from the B-tree associated with the corresponding connection points. When the historical operation is finished, Aurora switches the implementation to the standard push-based data structures and continues processing in the conventional fashion.

4 Run-time operation

The basic purpose of an Aurora run-time network is to process data flows through a potentially large workflow diagram. Figure 3 illustrates the basic Aurora architecture.

Fig. 3. Aurora run-time architecture

Here, inputs from data sources and outputs from boxes are fed to the router, which forwards them either to external applications or to the storage manager to be placed on the proper queue. The storage manager is responsible for maintaining the box queues and managing the buffer. Conceptually, the scheduler picks a box for execution, ascertains what processing is required, and passes a pointer to the box description (together with a pointer to the box state) to the multithreaded box processor. The box processor executes the appropriate operation and then forwards the output tuples to the router. The scheduler then ascertains the next processing step and the cycle repeats. The QoS monitor continually monitors system performance and activates the load shedder when it detects an overload situation and poor system performance. The load shedder then sheds load till the performance of the system reaches an acceptable level. The catalog in Fig. 3 contains information regarding the network topology, inputs, outputs, QoS information, and relevant statistics (e.g., selectivity, average box processing costs) and is essentially used by all components. We now describe Aurora's primary run-time architecture in more detail, focusing primarily on the storage manager, scheduler, QoS monitor, and load shedder.

4.1 QoS data structures

Aurora attempts to maximize the perceived QoS for the outputs it produces. QoS, in general, is a multidimensional function of several attributes of an Aurora system. These include:

• Response times – output tuples should be produced in a timely fashion, as otherwise QoS will degrade as delays get longer.
• Tuple drops – if tuples are dropped to shed load, then the QoS of the affected outputs will deteriorate.
• Values produced – QoS clearly depends on whether or not important values are being produced.

Asking the application administrator to specify a multidimensional QoS function seems impractical. Instead, Aurora relies on a simpler tactic that is much easier for humans to deal with: for each output stream, we expect the application administrator to give Aurora a two-dimensional QoS graph based on the processing delay of output tuples produced (as illustrated in Fig. 4a). Here, the QoS of the output is maximized if delay is less than the threshold, δ, in the graph. Beyond δ, QoS degrades with additional delay.
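A delay-based QoS graph of the kind just described can be modeled as a simple piecewise-linear utility function. The sketch below is our own illustration rather than anything prescribed by Aurora; in particular, the linear decay beyond δ and the point at which utility reaches zero are assumptions made for the example, since in Aurora the administrator draws the curve.

```python
def delay_qos(delay, delta, zero_at):
    """Piecewise-linear delay-based QoS: full utility up to the threshold
    delta, then linear decay down to zero utility at zero_at."""
    if delay <= delta:
        return 1.0
    if delay >= zero_at:
        return 0.0
    return (zero_at - delay) / (zero_at - delta)

# Utility of an output tuple produced 3 s after its input arrived,
# with delta = 2 s and utility assumed to reach zero at 6 s.
print(delay_qos(3.0, delta=2.0, zero_at=6.0))   # prints: 0.75
```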
Optionally, the application administrator can give Aurora two additional QoS graphs for all outputs in an Aurora system. The first, illustrated in Fig. 4b, shows the percentage of tuples delivered. In this case, the application administrator indicates that high QoS is achieved when tuple delivery is near 100% and that QoS degrades as tuples are dropped. The second optional QoS graph for outputs is shown in Fig. 4c. The possible values produced as outputs appear on the horizontal axis, and the QoS graph indicates the importance of each one. This value-based QoS graph captures the fact that some outputs are more important than others. For example, in a plant monitoring application, outputs near a critical region are much more important than ones well away from it. Again, if the application administrator has value-based QoS information, then Aurora will use it to shed load more intelligently than would occur otherwise.

Aurora makes several assumptions about the QoS graphs. First, it assumes that all QoS graphs are normalized so that QoS for different outputs can be quantitatively compared. Second, Aurora assumes that the value chosen for δ is feasible, i.e., that a properly sized Aurora network will operate with all outputs in the good zone to the left of δ in steady state. This will require the delay introduced by the total computational cost along the longest path from a data source to this output not to exceed δ. If the application administrator does not present Aurora with feasible QoS graphs, then the algorithms in the subsequent sections may not produce good results. Third, unless otherwise stated, Aurora assumes that
MDPI Submission System: How to Successfully Submit Your Manuscript for Publication

Introduction:
MDPI (Multidisciplinary Digital Publishing Institute) is a leading open-access publisher that offers a wide range of scientific journals covering different areas of research. The MDPI submission system is a user-friendly online platform designed to facilitate the submission and peer-review process. In this article, we provide a step-by-step guide on how to navigate the MDPI submission system and successfully submit your manuscript for publication.

1. Familiarize Yourself with MDPI Journals:
Before submitting your manuscript, carefully research and select the MDPI journal that best fits your research field. MDPI offers numerous journals across various disciplines, each with its own scope and requirements. Start by exploring the MDPI website and examining the aims and scope of different journals to ensure a good fit for your manuscript.

2. Prepare Your Manuscript:
Once you have selected a suitable journal, thoroughly review and adhere to the specific guidelines outlined by MDPI. These guidelines usually cover formatting, word-count limits, citation style, and ethical considerations. Take the time to ensure your manuscript is well written, properly structured, and follows the journal's specific instructions.

3. Create an Account:
Before submitting your manuscript, you will need to create an account on the MDPI submission system. This account will allow you to track the status of your manuscript throughout the review process and make any necessary revisions.

4. Start the Submission Process:
Once you have logged into the MDPI submission system, locate the "Submit Manuscript" option. This initiates the submission process. You will be prompted to select the journal you wish to submit to and provide basic information about your manuscript, such as the title, abstract, and keywords.

5. Upload Your Manuscript:
After entering the required details, you will be prompted to upload your manuscript file. MDPI's submission system supports various file formats, such as Microsoft Word and PDF. Make sure to follow the journal's guidelines regarding file format and size limitations.

6. Provide Additional Information:
In addition to your manuscript, you may be required to provide further information during the submission process, including author details, affiliations, funding information, and conflicts of interest. Be thorough and accurate when providing this information, as it helps to establish transparency and credibility.

7. Submit and Confirm:
After completing all the necessary steps, carefully review your submission for any errors or missing information. Once satisfied, click the "Submit" button to finalize the submission. MDPI's submission system will generate a confirmation email to acknowledge receipt of your manuscript.

8. Track the Review Process:
After submission, you will have access to a personalized dashboard within the MDPI submission system. This dashboard allows you to track the progress of your manuscript's review, including any updates or requests from reviewers. Check your dashboard regularly and respond promptly to any required actions or revisions.

9. Revise, Resubmit, or Accept:
During the review process, you may receive comments or suggestions from the peer reviewers. Take this criticism constructively and make the necessary revisions to improve the quality of your manuscript. If major revisions are required, you may need to resubmit your revised manuscript through the MDPI submission system. Once the revisions are accepted, your manuscript will be ready for publication.

Conclusion:
Submitting your manuscript to MDPI through their submission system involves a step-by-step process designed to ensure a smooth and successful publication. By familiarizing yourself with MDPI journals, carefully preparing your manuscript, creating an account, following the submission guidelines, and actively tracking the review process, you can maximize your chances of getting your research published in a reputable scientific journal.
Resubmitted manuscript received

"Resubmitted manuscript received" means that a resubmitted manuscript has been received. This term is typically used in academic publishing and similar contexts. After authors have revised or improved a previously submitted manuscript, they may resubmit it to the relevant journal or reviewers.

Once the resubmitted manuscript has been received, the next steps may include further review, evaluation, or processing. This may involve the reviewers re-examining the revised manuscript, or the editorial team processing it to determine whether it meets the requirements for publication.

This stage is an important milestone for authors, because it means their effort and improvements have been received and are moving into the next phase. Authors usually look forward to feedback on the manuscript, so as to learn whether further revisions are needed or whether it can proceed to the subsequent publication workflow. After receiving a "Resubmitted manuscript received" notification, authors may feel a certain amount of tension and anticipation, hoping that their effort will receive a positive response. During this period, it is important to wait patiently for feedback and to communicate with the relevant team or reviewers.

In short, "Resubmitted manuscript received" is an important status in the manuscript submission process, indicating that the revised manuscript has been received and is undergoing the next stage of processing. I hope this explanation helps; if you have any further questions or would like to discuss anything else, feel free to ask.
Unavailable Dates in ScholarOne Manuscripts

ScholarOne Manuscripts is a manuscript management system widely used by academic journals for their submission, peer-review, editing, and publication workflows. One important feature of the system is "Unavailable Dates". This article takes a close look at the Unavailable Dates feature in ScholarOne Manuscripts to help authors, reviewers, and editors make better use of it and improve the efficiency of manuscript handling.

1. What the Unavailable Dates feature is:
In ScholarOne Manuscripts, unavailable dates are periods during which an author, reviewer, or editor is not in a position to handle manuscripts or take part in manuscript management, whether for personal reasons, holidays, conference attendance, or other scheduling conflicts. By setting unavailable dates, users ensure that the system avoids assigning manuscripts or review tasks during those periods, keeping the manuscript-management workflow running smoothly.

2. How to set unavailable dates:
(1) Authors: during submission, authors can mark certain periods in the system as unavailable. Editors will then steer clear of those dates when arranging reviewers, taking the author's schedule fully into account.
(2) Reviewers: reviewers can likewise set unavailable dates in the system. This helps ensure that reviewers handle manuscripts when they are in their best working state, and avoids delays in the review process caused by scheduling conflicts.
(3) Editors: as the central figure in the whole manuscript-management workflow, an editor can also set unavailable dates, which helps in planning the overall process around the editor's own workload and other time commitments.

3. Advantages of using the Unavailable Dates feature:
(1) Higher efficiency: sensibly set unavailable dates prevent tasks from being assigned when authors, reviewers, or editors cannot conveniently handle them, improving the efficiency of the entire manuscript-handling workflow.
(2) Respect for individual differences: different people work with different efficiency and in different condition at different times. The feature accommodates these individual differences, making the whole system more flexible and humane.
(3) Fewer time conflicts: in busy academic and editorial schedules, time conflicts are hard to avoid. Used well, the feature can effectively reduce the problems they cause.

4. Points to note:
(1) Plan ahead: to make the best use of the Unavailable Dates feature, users need to plan their work and leave in advance, so that the system can reliably steer around the unavailable dates.
EndNote format for Mycosystema (菌物学报)

Mycosystema (Journal of Mycology) is a journal covering the field of mycology, and references in it are typically cited using an EndNote output style. EndNote is a widely used reference-management tool that helps authors manage and cite literature with ease. In EndNote, citations for Mycosystema generally follow the common international citation format for scientific journals, including the authors, article title, journal name, volume, issue, page numbers, and year of publication. When using EndNote, you can select the Mycosystema citation style and add the corresponding reference information to your library; EndNote will then automatically generate citations in the format the journal requires.

Note that different journals may require different EndNote citation formats, so when citing literature for Mycosystema it is advisable to select the journal's official citation style in EndNote to ensure that the citation format is accurate and consistent.

In short, Mycosystema's EndNote format follows the common international citation format for scientific journals, and the EndNote software makes it convenient to generate reference citations that meet the journal's requirements.
Reference format for Acta Microbiologica Sinica (微生物学报)

1. Requirements:
(1) References must be the most important works that the author cites directly in the paper and that have appeared in formal publications.
(2) Unpublished materials (such as conference proceedings and personal communications, dissertations excepted) are generally not cited as references; where necessary, they may be handled as footnotes.
(3) References are ordered by first appearance in the text; a research article should generally cite no more than 20, a review no more than 25.

2. Format (please follow strictly):
(1) List all authors of each reference. [in force since 2010]
(2) Names take the surname-first form: the surname is written in full and given names are abbreviated (no abbreviation points, no space between double initials). Authors are separated by commas, without "和" or "and".
(3) Foreign journal names must be spelled out in full and not abbreviated [in force since 2009]; Western journal names are set in italics with the first letter of every major word capitalized.
(4) For journals not in English, respect the original language and append the English translation after the original reference.

3. Patterns:
(1) Journal article: [no.] Authors. Article title. Journal name, year, volume(issue): first-last pages.
(2) Book: [no.] Authors. Book title. Edition. Place of publication: Publisher, year: first-last pages. Book chapter: [no.] Chapter authors. Chapter title // Book authors. Book title. Edition. Place of publication: Publisher, year: first-last pages.
(3) Translated work: [no.] Original foreign author. Chinese book title. Translated by [Chinese translators], et al. Edition. Place of publication: Publisher, year: first-last pages.
(4) Patent: [no.] Patent holder. Patent title. Country of patent: patent number. Date.
(5) Thesis: [no.] Author. Thesis title. Degree thesis of [institution], year.
(6) Online resource: give the URL and the date accessed.
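The journal-article pattern above ("[no.] Authors. Title. Journal, year, volume(issue): pages.") is regular enough to be generated mechanically. The snippet below is a small illustrative sketch of our own, not an official tool of the journal; the example entry is invented purely to show the output shape.

```python
def format_journal_ref(no, authors, title, journal, year, volume, issue, pages):
    """Format one journal reference in the pattern:
    [no.] Authors. Title. Journal, year, volume(issue): pages."""
    author_str = ", ".join(authors)   # surname first, initials, no "and"
    return f"[{no}] {author_str}. {title}. {journal}, {year}, {volume}({issue}): {pages}."

# Hypothetical entry, used only to show the shape of the output.
print(format_journal_ref(
    1, ["Zhang SF", "Li Q"], "A study of microbial diversity",
    "Acta Microbiologica Sinica", 2008, 48, 3, "321-328"))
# [1] Zhang SF, Li Q. A study of microbial diversity. Acta Microbiologica Sinica, 2008, 48(3): 321-328.
```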
Prediction of Database Research Focus Based on Statistical Data of VLDB and SIGMOD Papers

Lin Yong 201021030117, Zhen Zhen 201021029968
(School of Software Engineering, South China University of Technology, Guangzhou 510006, China)

Abstract: Several of the most authoritative international academic conferences in the database field, such as VLDB, SIGMOD, and ICDE, provide a platform where database researchers, developers, and practitioners from around the world discuss and exchange ideas, guiding and promoting the development of the discipline. From the papers researchers publish at these conferences, we can identify the current hot topics in database technology and discern the future direction of related research. First, we analyzed the papers on core database technology accepted by VLDB and SIGMOD in 2008, 2009, and 2010; we also drew on the statistics covering these two conferences from 1997 to 2006 reported in "Observation on Database Research Trends via Publication Statistics". We classified the papers into 28 categories: 22 are based on traditional database technology, and the other 6 are new categories we added during the analysis. We then predict that these six new categories will continue to gain popularity and become the new hot topics of the future, which is a distinguishing feature of this paper. Finally, we analyze how the hot topics of database technology have shifted in recent years and project the future development of database technology.

Keywords: database; research focus; trend

Contents
1 Introduction
2 Overview
3 Statistics and overall analysis
4 Detailed analysis
4.1 Traditional topics heating up
4.2 Traditional topics holding steady
4.3 Traditional topics cooling down
4.4 Emerging topics
4.5 Formerly hot topics no longer hot
5 Conclusions
References

1 Introduction
Since its emergence in the mid-1960s, database technology has passed through the stages of manual and file-based data management, hierarchical and network database systems, relational database systems, and a new generation of database systems.
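The bookkeeping behind such a study, classifying each accepted paper into a category and tracking per-year counts to spot warming and cooling topics, reduces to a frequency table. The sketch below is our own minimal illustration with invented labels; it is not the authors' code, and the hard part of the real study (assigning each paper to one of the 28 categories) remains a manual judgment.

```python
from collections import Counter, defaultdict

# (year, category) pairs for accepted papers; the labels are invented examples.
papers = [
    (2008, "query processing"), (2008, "cloud data management"),
    (2009, "cloud data management"), (2010, "cloud data management"),
    (2010, "graph data management"),
]

by_year = defaultdict(Counter)
for year, category in papers:
    by_year[year][category] += 1

# Call a category "warming" if its count in the last year exceeds the first.
first, last = min(by_year), max(by_year)
warming = [c for c in by_year[last] if by_year[last][c] > by_year[first][c]]
print(warming)   # prints: ['graph data management']
```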
Please Select an Article Type
Selecting an article type is required for submission. To submit your manuscript to this journal, you need to complete all submission steps and approve the PDF that the system creates. Please note that submissions that have not been completed will be removed after 90 days. Please choose the article type of your manuscript from the drop-down menu. The Guide for Authors specifies the journal's requirements; a link to this is on the banner above, and will be available throughout the submission process. You may also view the submission-to-publication lifecycle.

Please Enter the Full Title of Your Submission
Entering a full title is required for submission. Please enter only the title of your manuscript below; additional comments may be entered at a later step.

Please Enter the Following
Please enter the details of all authors (other than you) who contributed to the work reported in your manuscript, clicking Add Author after each: first name (required), middle initial, last name (required), academic degree(s), affiliation, and e-mail address; select whether each is the corresponding author. By beginning the manuscript submission process, you are automatically designated as the corresponding author.

Suggest Reviewers
Suggesting 8 reviewers is required for submission. Please read the requirements for submitting reviewers thoroughly and completely before preparing an appropriate list of referees to submit online with your manuscript. For each suggested reviewer, give first name (required), middle initial, last name (required), academic degree(s), position, department, institution (required), e-mail address (required), and a reason (required); please include the country the person is from in the Reason box. Please note that the journal may not use your suggestions, but your help is appreciated and may speed up the selection of appropriate reviewers.

Oppose Reviewers
Please identify anyone who you would prefer not to review this manuscript (e.g., due to a conflict of interest). This is not a mandatory step. For each opposed reviewer, give first name (required), middle initial, last name (required), academic degree(s), position, department, institution, e-mail address (required), and a reason (required).
VLDB Journal manuscript No. (will be inserted by the editor)

On Automating Web Services Discovery

Boualem Benatallah (1), Mohand-Said Hacid (2), Alain Leger (3), Christophe Rey (4), Farouk Toumani (4)

(1) School of Computer Science and Engineering, University of New South Wales, Sydney, Australia (boualem@.au)
(2) LIRIS, Université Lyon I, France (mshacid@liris.univ-lyon1.fr)
(3) France Telecom R&D (alain.leger@)
(4) LIMOS, Université Blaise Pascal, France ({rey,ftoumani}@isima.fr)

Received: December 15, 2002 / Revised version: date

Abstract One of the challenging problems that web service technology faces is the ability to effectively discover services based on their capabilities. We present an approach that tackles this problem in the context of description logics (DLs). We formalize service discovery as a new instance of the problem of rewriting concepts using terminologies, which we call the best covering problem. We provide a formalization of the best covering problem in the framework of DL-based ontologies and propose a hypergraph-based algorithm to effectively compute best covers of a given request. More precisely, we propose a novel matchmaking algorithm that takes as input a service request (or query) Q and an ontology T of services, and finds a set of services, called a "best cover" of Q, whose descriptions contain as much common information with Q as possible and as little extra information with respect to Q as possible. We have implemented the proposed discovery technique and used the developed prototype in the context of the Multilingual Knowledge Based European Electronic Marketplace (MKBEEM) project.

Keywords: Web services discovery, Semantic matchmaking, Description logics, Hypergraphs

1 Introduction

Semantic web services are emerging as a promising technology for the effective automation of service discovery, combination, and management [31,21,20]. They aim at leveraging two major trends in web technologies, namely web services and the semantic web:

– Web services build upon XML as a vehicle for exchanging messages across applications. The basic technological infrastructure for web services is structured around three major standards: SOAP (Simple Object Access Protocol), WSDL (Web Services Description Language), and UDDI (Universal Description, Discovery and Integration) [12,18]. These standards provide the building blocks for service description, discovery, and communication. While web services technologies have clearly influenced positively the potential of the web infrastructure by providing programmatic access to information and services, they are hindered by the lack of rich and machine-processable abstractions to describe service properties, capabilities, and behavior. As a result of these limitations, very little automation support can be provided to facilitate effective discovery, combination, and management of services. Automation support is considered the cornerstone of effective and efficient access to services in large, heterogeneous, and dynamic environments [13,12,20]. Indeed, until recently the basic web services infrastructure was mainly used to build simple web services such as those providing information search capabilities to an open audience (e.g., stock quotes, search engine queries, auction monitoring).

– The semantic web aims at improving the technology needed to organize, search, integrate, and evolve web-accessible resources (e.g., web documents, data) by using rich and machine-understandable abstractions for the representation of resource semantics. Ontologies are proposed as a means to address semantic heterogeneity among web-accessible information
sources and services. They are used to provide meta-data for the effective manipulation of available information, including discovering information sources and reasoning about their capabilities. Efforts in this area include the development of ontology languages such as RDF, DAML, and DAML+OIL [16]. In the context of web services, ontologies promise to take interoperability a step further by providing rich description and modeling of service properties, capabilities, and behavior.

By leveraging efforts in both web services and the semantic web, the semantic web services paradigm promises to take web technologies a step further by providing foundations to enable the automated discovery, access, combination, and management of web services. Efforts in this area focus on providing rich and machine-understandable representations of service properties, capabilities, and behavior, as well as reasoning mechanisms to support automation activities [31,14,21,20,15,11]. Examples of such efforts include DAML-S [15], WSMF (Web Services Modelling Framework) [21], and METEOR-S (/proj/meteor/SWP.htm). Work in this area is still in its infancy. Many of the objectives of the semantic web services paradigm, such as capability description of services, dynamic service discovery, and goal-driven composition of web services, remain to be reached.

Our work focuses on the issue of dynamic discovery of web services based on their capabilities. Dynamic service discovery rests on the rationale that services are selected, at run time, based on their properties and capabilities. Our aim is to ground the discovery process in a matchmaking between a requester query and the available web service descriptions. We formalize the service discovery approach in the context of Description Logics (DLs) [17]. A key aspect of description logics is their formal semantics and reasoning support. DLs provide an effective reasoning paradigm for defining and understanding the structure and semantics of concept description ontologies. This is essential to provide formal foundations for the envisioned semantic web paradigm [27,29,28]. Indeed, DLs have heavily influenced the development of some semantic web ontology languages (e.g., DAML+OIL or OWL [2]).

Our work aims at enhancing the potential of web services by focusing on formal foundations and flexible aspects of their discovery. More specifically, we make the following contributions:

– Flexible Matchmaking between Service Descriptions and Requests. We propose a matchmaking technique that goes beyond simple subsumption comparisons between a service request and service advertisements. As emphasized in [33], a service discovery mechanism should support flexible matchmaking, since it is unrealistic to expect service requests and service advertisements to match exactly. To cope with this requirement, we propose to use a difference operation on service descriptions. Such an operation makes it possible to extract, from a set of web service descriptions, the part that is semantically common with a given service request and the part that is semantically different from it. Knowing the former and the latter allows relevant web services to be selected effectively. We propose a novel matchmaking algorithm that takes as input a service request (or query) Q and an ontology T of services, and finds a set of services, called a "best cover" of Q, whose descriptions contain as much common information with Q as possible and as little extra information with respect to Q as possible.

– Concept Rewriting for Effective Service
Matchmaking. We formalize service matchmaking as a new instance of the problem of rewriting concepts using terminologies [26,6]. We call this new instance the best covering problem. We provide a formalization of the best covering problem in the context of DL-based ontologies and propose a hypergraph-based algorithm to effectively compute the best covers of a given request.

– Characterization of Service Discovery Automation in DAML-S Service Ontologies. We investigate the reasoning problem associated with service discovery in DAML-S ontologies and its relationship with the expressiveness of the language used to express service descriptions.

To study the feasibility of our approach, we have implemented the proposed discovery technique and used the developed prototype in the context of the Multilingual Knowledge Based European Electronic Marketplace (MKBEEM) project.

Organization of the Paper

The remainder of this paper is organized as follows: Section 2 provides an overview of the basic concepts of description logics. Section 3 describes the formalization of service discovery in the context of DL-based ontologies. Section 4 presents the hypergraph-based algorithm for computing best covers. An extension of our approach to accommodate DAML-S ontologies is presented in Section 5. Section 6 describes an implementation of the proposed service discovery technique and discusses some preliminary experimental results. We review related work in Section 7 and give concluding remarks in Section 8.

2 Description Logics: An Overview

Our approach uses Description Logics (DLs) [4] as a formal framework. DLs are a family of logics that were developed for modeling complex hierarchical structures and that provide a specialized reasoning engine to perform inferences on these structures. The main reasoning mechanisms (e.g., subsumption or satisfiability) are decidable for the main description logics ([17]).
Recently, DLs have heavily influenced the development of the semantic web languages. For example, DAML+OIL, the ontology language used by DAML-S, is in fact an alternative syntax for a very expressive description logic [28]. In this section, we first give the basic definitions, and then we describe the notion of difference between descriptions, which is the core operation used in our framework.

2.1 Basic Definitions

Description logics allow a domain of interest to be represented in terms of concepts or descriptions (unary predicates), which characterize subsets of the objects (individuals) in the domain, and roles (binary predicates) over such a domain. Concepts are denoted by expressions formed by means of special constructs. Examples of the DL constructs considered in this paper are:

– the symbol ⊤, a concept description which denotes the top concept, while the symbol ⊥ stands for the bottom concept,
– concept conjunction (⊓), e.g., the concept description parent ⊓ male denotes the set of fathers (i.e., male parents),
– the universal role quantification (∀R.C), e.g., the description ∀child.male denotes the set of individuals whose children are all male,
– the number restriction constructs (≥ n R) and (≤ n R), e.g., the description (≥ 1 child) denotes the set of parents (i.e., individuals having at least one child), while the description (≤ 1 Leader) denotes the set of individuals that cannot have more than one leader.

The various description logics differ from one another based on the set of constructs they allow. Table 1 shows the constructs of two DLs: FL0 and ALN. A concept obtained using the constructs of a description logic L is called an L-concept.

Construct name               Syntax            Semantics                                  FL0  ALN
concept name                 P                 P^I ⊆ ∆^I                                  X    X
top                          ⊤                 ∆^I                                        X    X
bottom                       ⊥                 ∅                                               X
conjunction                  C ⊓ D             C^I ∩ D^I                                  X    X
universal quantification     ∀R.C              {x ∈ ∆^I : ∀y, (x,y) ∈ R^I → y ∈ C^I}      X    X
primitive negation           ¬P                ∆^I \ P^I                                       X
at-least number restriction  (≥ n R), n ∈ N    {x ∈ ∆^I : #{y : (x,y) ∈ R^I} ≥ n}              X
at-most number restriction   (≤ n R), n ∈ N    {x ∈ ∆^I : #{y : (x,y) ∈ R^I} ≤ n}              X

Table 1 Syntax and semantics of some concept-forming constructs.

The semantics of a concept description is defined in terms of an interpretation I = (∆^I, ·^I), which consists of a nonempty set ∆^I, the domain of the interpretation, and an interpretation function ·^I, which associates to each concept name P a subset P^I of ∆^I and to each role name R a binary relation R^I ⊆ ∆^I × ∆^I. Additionally, the extension of ·^I to arbitrary concept descriptions is defined inductively as shown in the third column of Table 1. Based on this semantics, subsumption, equivalence, and the notion of a least common subsumer (lcs) (informally, the most specific description which subsumes all the given concepts [5]) are defined as follows. Let C, C1, ..., Cn and D be concept descriptions:

– C is subsumed by D (noted C ⊑ D) iff C^I ⊆ D^I for every interpretation I.
– C is equivalent to D (noted C ≡ D) iff C^I = D^I for every interpretation I.
– D is a least common subsumer of C1, ..., Cn (noted D = lcs(C1, ..., Cn)) iff: (1) Ci ⊑ D for all 1 ≤ i ≤ n, and (2) D is the least concept description with this property, i.e., if D' is a concept description satisfying Ci ⊑ D' for all 1 ≤ i ≤ n, then D ⊑ D' [5].

The intensional descriptions contained in a knowledge base built using a description logic are called a terminology. The kind of terminologies we consider in this paper is defined below.

Definition 1 (terminology) Let A be a concept name and C be a concept description. Then A ≐ C is a concept definition. A terminology T is a finite set of concept definitions such that each concept name occurs at most once in the left-hand side of a definition.

A concept name A is called a defined concept in the terminology T iff it occurs in the left-hand side of a concept definition in T; otherwise, A is called an atomic concept.
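To make the set-theoretic semantics of Table 1 concrete, the following sketch (ours, not part of the original paper) evaluates ALN concept descriptions over a small finite interpretation. The tuple encoding of concepts and all identifiers are illustrative assumptions, not an established API.

```python
# A minimal sketch (assumed encoding): an ALN concept is a nested tuple, e.g.
# ("and", ("name", "parent"), ("name", "male")); an interpretation I is given
# by its domain `delta`, a map `conc` from concept names to subsets of delta,
# and a map `role` from role names to sets of pairs over delta (cf. Table 1).

def extension(concept, delta, conc, role):
    """Return the extension concept^I, following the third column of Table 1."""
    tag = concept[0]
    if tag == "top":
        return set(delta)
    if tag == "bottom":
        return set()
    if tag == "name":                                  # concept name P
        return set(conc.get(concept[1], set()))
    if tag == "not":                                   # primitive negation ¬P
        return set(delta) - set(conc.get(concept[1], set()))
    if tag == "and":                                   # conjunction C ⊓ D
        result = set(delta)
        for c in concept[1:]:
            result &= extension(c, delta, conc, role)
        return result
    if tag == "all":                                   # ∀R.C
        _, r, c = concept
        c_ext = extension(c, delta, conc, role)
        return {x for x in delta
                if all(y in c_ext for (u, y) in role.get(r, set()) if u == x)}
    if tag == "atleast":                               # (≥ n R)
        _, n, r = concept
        return {x for x in delta
                if len({y for (u, y) in role.get(r, set()) if u == x}) >= n}
    if tag == "atmost":                                # (≤ n R)
        _, n, r = concept
        return {x for x in delta
                if len({y for (u, y) in role.get(r, set()) if u == x}) <= n}
    raise ValueError("unknown construct: %s" % tag)

# Usage: "(≥ 1 child) ⊓ ∀child.male", the parents whose children are all male.
delta = {"ann", "bob", "carl", "dora"}
conc = {"male": {"bob", "carl"}}
role = {"child": {("ann", "bob"), ("ann", "carl"), ("dora", "dora")}}
query = ("and", ("atleast", 1, "child"), ("all", "child", ("name", "male")))
print(extension(query, delta, conc, role))             # -> {'ann'}
```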
An interpretation I satisfies the statement A ≐ C iff A^I = C^I. An interpretation I is a model for a terminology T if I satisfies all the statements in T. A terminology built using the constructs of a language L is called an L-terminology (in the sequel, we use the terms terminology and ontology interchangeably). In the sequel, we assume that a terminology T is acyclic, i.e., there do not exist cyclic dependencies between concept definitions. Acyclic terminologies can be unfolded by replacing defined names by their definitions until no more defined names occur on the right-hand sides. Therefore, the notion of least common subsumer (lcs) of a set of descriptions can be straightforwardly extended to concepts containing defined names. In this case, we use the expression lcs_T(C, D) to denote the least common subsumer of the concepts C and D w.r.t. a terminology T (i.e., the lcs is applied to the unfolded descriptions of C and D).

2.2 The Difference Operation

In this section, we recall the main results obtained by Teege in [36] regarding the difference operation between two concept descriptions.

Definition 2 (difference operation) Let C, D be two concept descriptions with C ⊑ D. The difference C − D of C and D is defined by C − D := max{B | B ⊓ D ≡ C}.

The difference of two descriptions C and D is thus a description containing all the information which is part of the description C but not part of the description D. This definition of the difference operation requires that the second operand subsume the first one. However, in case the operands C and D are incomparable w.r.t. the subsumption relation, the difference C − D can be given by constructing the least common subsumer of C and D, that is, C − D := C − lcs(C, D). It is worth noting that, in some description logics, the set C − D may contain descriptions which are not semantically equivalent, as illustrated by the following example.

Example 1 Consider the descriptions C ≐ (∀R.P1) ⊓ (∀R.¬P1) and D ≐ (∀R.P2) ⊓ (∀R.(≤ 4 S)). The set C − D includes, among others, the non-equivalent descriptions (∀R.¬P2) and (∀R.(≥ 5 S)).

Teege [36] provides sufficient conditions to characterize the logics where the difference operation is always semantically unique and can be syntactically realized by constructing the set difference of subterms in a conjunction. Some basic notions and important results of this work are introduced below.

Definition 3 (reduced clause form and structure equivalence) Let L be a description logic.

– A clause in L is a description A with the following property: (A ≡ B ⊓ A) ⇒ (B ≡ ⊤) ∨ (B ≡ A). Every conjunction A1 ⊓ ... ⊓ An of clauses can be represented by the clause set {A1, ..., An}.
– A = {A1, ..., An} is called a reduced clause set if either n = 1, or no clause subsumes the conjunction of the other clauses: ∀ 1 ≤ i ≤ n : A_i does not subsume A \ {A_i}. The set A is then called a reduced clause form (RCF) of every description B ≡ A1 ⊓ ... ⊓ An.
– Let A = {A1, ..., An} and B = {B1, ..., Bm} be reduced clause sets in a description logic L. A and B are structure equivalent (denoted A ≅ B) iff: n = m ∧ ∀ 1 ≤ i ≤ n ∃ 1 ≤ j, k ≤ n : A_i ≡ B_j ∧ B_i ≡ A_k.
– If, in a description logic, for every description all its RCFs are structure equivalent, we say that RCFs are structurally unique in that logic.
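Definition 3 recasts descriptions as clause sets; in logics with structurally unique RCFs, the difference of Definition 2 can then be realized as a set difference of clauses modulo equivalence, as formalized next. A hedged sketch, where concepts are assumed to be given as RCF clause lists and `equivalent` stands in for a reasoner's clause-equivalence test (both names are ours):

```python
def structural_difference(c_clauses, d_clauses, equivalent):
    """C − D as a clause-set difference: keep each clause of C that has no
    equivalent counterpart in D (valid in logics with structurally unique RCFs)."""
    return [a for a in c_clauses
            if not any(equivalent(a, b) for b in d_clauses)]

# With plain equality standing in for logical equivalence:
print(structural_difference(["A", "B"], ["B", "C"], lambda a, b: a == b))  # ['A']
```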
The structural difference operation is defined as the set difference between clause sets, where clauses are compared on the basis of the equivalence relationship. Let us now introduce the notion of structural subsumption as defined in [36].

Definition 4 The subsumption relation in a description logic L is said to be structural iff for any clause A ∈ L and any description B = B1 ⊓ ... ⊓ Bm ∈ L given by its RCF, the following holds: A ⊒ B ⇔ ∃ 1 ≤ i ≤ m : A ⊒ B_i.

[36] provides two interesting results: (1) in description logics with structurally unique RCFs, the difference operation can be straightforwardly determined using the structural difference operation, and (2) structural subsumption is a sufficient condition for a description logic to have structurally unique RCFs. Consequently, structural subsumption is a sufficient condition for identifying logics where the difference operation is semantically unique and can be implemented using the structural difference operation. However, it is worth noting that the definition of structural subsumption given in [36] differs from the one usually used in the literature. Unfortunately, a consequence of this remark is that many description logics for which a structural subsumption algorithm exists (e.g., ALN [32]) do not have structurally unique RCFs. Nevertheless, the result given in [36] is still interesting in practice, since several description logics satisfy this property. Examples of such logics include the language FL0 ∪ (≥ n R), which we have used in the context of the MKBEEM project, and the more powerful description logic L1 [36], which contains the following constructs:

– ⊤, conjunction (⊓), ⊥, (≥ n R), existential role quantification (∃R.C), and existential feature quantification (∃f.C) for concepts, where C denotes a concept, R a role, and f a feature (i.e., a functional role),
– bottom (⊥), composition (◦), and differentiation (|) for roles,
– bottom (⊥) and composition (◦) for features.

In the remainder of this paper we use the term structural subsumption in the sense of [36].

Size of a description. Let L be a description logic with structural subsumption. We define the size |C| of an L-concept description C as the number of clauses in its RCFs (we recall that, since L has structurally unique RCFs, all the RCFs of an L-description are equivalent and thus have the same number of clauses). If necessary, a more precise measure of the size of a description can be defined by taking into account the size of each clause (e.g., by counting the number of occurrences of concept and role names in each clause). However, in this case one must use some kind of canonical form to deal with the problem of different descriptions of equivalent clauses. It should be noted that, in a description logic with structurally unique RCFs, it is often possible to define a canonical form which is itself an RCF [36].
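Definition 4 also yields a simple clause-wise decision procedure in such logics: C subsumes D exactly when every clause of C subsumes some clause of D. A sketch under the same assumptions as above (`clause_subsumes` is again a reasoner stub, and the names are ours):

```python
def subsumes(c_clauses, d_clauses, clause_subsumes):
    """Test C ⊒ D for descriptions given by their RCFs, per Definition 4."""
    return all(any(clause_subsumes(a, b) for b in d_clauses)
               for a in c_clauses)

def size(c_clauses):
    """|C|: the number of clauses in an RCF of C."""
    return len(c_clauses)
```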
3 Formalization of the Best Covering Problem

In this section, we first formalize the best covering problem in the framework of description logics with structural subsumption. Then we describe how to compute best covers using a hypergraph-based algorithm.

3.1 Problem Statement

Let us first introduce some basic definitions that are required to formally define the best covering problem. Let L be a description logic with structural subsumption, let T be an L-terminology, and let Q ≢ ⊥ be a coherent L-concept description. The set of defined concepts occurring in T is denoted S_T = {S_i, i ∈ [1,n]}, with S_i ≢ ⊥ for all i ∈ [1,n]. In the sequel, we assume that the concept descriptions S_i, i ∈ [1,n], are represented by their RCFs.

Definition 5 (cover) A cover of Q using T is a conjunction E of some names S_i from T such that: Q − lcs_T(Q, E) ≢ Q.

Hence, a cover of a concept Q using T is any conjunction of concepts occurring in T which shares some common information with Q. It is worth noting that a cover E of Q is always consistent with Q (i.e., Q ⊓ E ≢ ⊥), since L is a description logic with structurally unique RCFs, Q ≢ ⊥, and S_i ≢ ⊥ for all i ∈ [1,n]. (If the language L contains the incoherent concept ⊥, then ⊥ must be a clause, i.e., a non-trivial decomposition of ⊥ is not possible, meaning we cannot have an incoherent conjunction of coherent clauses; otherwise it is easy to show that L does not have structurally unique RCFs.)

To define the notion of best cover, we need to characterize the part of the description of a cover E that is not contained in the description of the query Q, and the part of the query Q that is not contained in the description of its cover E.

Definition 6 (rest and miss) Let Q be an L-concept description and E a cover of Q using T. The rest of Q with respect to E, denoted Rest_E(Q), is defined as: Rest_E(Q) ≡ Q − lcs_T(Q, E). The missing information of Q with respect to E, denoted Miss_E(Q), is defined as: Miss_E(Q) ≡ E − lcs_T(Q, E).

Now we can define the notion of best cover.

Definition 7 (best cover) A concept description E is called a best cover of Q using a terminology T iff:

– E is a cover of Q using T, and
– there does not exist a cover E' of Q using T such that (|Rest_E'(Q)|, |Miss_E'(Q)|) < (|Rest_E(Q)|, |Miss_E(Q)|), where < is the lexicographic order operator.

A best cover is thus a cover that has, first, the smallest rest and, second, the smallest miss. The best covering problem, denoted BCOV(T, Q), is then the problem of computing all the best covers of Q using T.

Theorem 1 (Complexity of BCOV(T, Q)) The best covering problem is NP-hard.

The proof of this theorem follows from results on the minimal rewriting problem [6] (see [25] for a detailed proof).

3.2 Mapping Best Covers to Hypergraph Transversals

Let us first recall some necessary definitions regarding hypergraphs.

Definition 8 (hypergraph and transversals) [19] A hypergraph H is a pair (Σ, Γ) of a finite set Σ = {V_1, ..., V_n} and a set Γ of subsets of Σ. The elements of Σ are called vertices, and the elements of Γ are called edges. A set T ⊆ Σ is a transversal of H if for each ε ∈ Γ, T ∩ ε ≠ ∅. A transversal T is minimal if no proper subset T' of T is a transversal. The set of the minimal transversals of a hypergraph H is noted Tr(H).

In this section, we describe how to express the best covering problem as the problem of finding the minimal transversals with minimal cost of a given hypergraph.

Definition 9 (hypergraph generation) Let L be a description logic with structural subsumption, T be an L-terminology, and Q be an L-concept description. Given an instance BCOV(T, Q) of the best covering problem, we build a hypergraph H_TQ = (Σ, Γ) as follows:

– each concept name S_i in T is associated with a vertex V_Si in the hypergraph H_TQ. Thus Σ = {V_Si, i ∈ [1,n]}.
– each clause A_i ∈ Q, for i ∈ [1,k], is associated with an edge in H_TQ, noted w_Ai, with w_Ai = {V_Si | S_i ∈ S_T and A_i ∈≡ lcs_T(Q, S_i)}, where ∈≡ stands for the membership test modulo equivalence of clauses and lcs_T(Q, S_i) is given by its RCF.

Notation. For the sake of clarity we introduce the following notation. For any set of vertices X = {V_Si}, subset of Σ, we use the expression E_X ≡ ⊓_{V_Si ∈ X} S_i to denote the concept obtained as the conjunction of the concept names corresponding to the vertices in X. Inversely, for any concept E ≡ S_i1 ⊓ ... ⊓ S_im, we use the expression X_E = {V_Sij, j ∈ [1,m]} to denote the set of vertices corresponding to the concept names in E.
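A sketch of the edge construction of Definition 9 under the same clause-set encoding as before: `services` maps each defined concept name S_i to its unfolded clause set, and `in_lcs(a, s_clauses)` is a stub for the reasoner-backed test a ∈≡ lcs_T(Q, S_i); all identifiers are ours.

```python
def build_edges(q_clauses, services, in_lcs):
    """One edge w_a per clause a of Q: the set of vertices (service names) S
    such that a belongs, modulo equivalence, to lcs_T(Q, S) (Definition 9)."""
    return {a: {s for s, s_clauses in services.items() if in_lcs(a, s_clauses)}
            for a in q_clauses}
```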
Lemmas 1 and 2 below state that computing a cover of Q using T that minimizes the rest amounts to computing a transversal of H_TQ restricted to its non-empty edges. Proofs of these lemmas are presented in [25].

Lemma 1 (characterization of the minimal rest) Let L be a description logic with structural subsumption, T be an L-terminology, and Q be an L-concept description. Let H_TQ = (Σ, Γ) be the hypergraph built from the terminology T and the concept Q = A_1 ⊓ ... ⊓ A_k provided by its RCF. The minimal rest (i.e., the rest whose size is minimal) of rewriting Q using T is: Rest_min ≡ A_j1 ⊓ ... ⊓ A_jl, where the j_i ∈ [1,k] are exactly the indices such that w_Aji = ∅.

From the previous lemma, we know that the minimal rest of rewriting a query Q using T is always unique and is equivalent to Rest_min.

Lemma 2 (characterization of covers that minimize the rest) Let H'_TQ = (Σ, Γ') be the hypergraph built by removing the empty edges from H_TQ. A rewriting E ≡ S_i1 ⊓ ... ⊓ S_im, with 1 ≤ m ≤ n and S_ij ∈ S_T for 1 ≤ j ≤ m, is a cover of Q using T that minimizes the rest iff X_E = {V_Sij, j ∈ [1,m]} is a transversal of H'_TQ.

This lemma characterizes the covers that minimize the rest. Consequently, computing the best covers consists of determining, among those covers, the ones that minimize the miss. To express miss minimization in the hypergraph framework, we introduce the following notion of cost.

Definition 10 (cost of a set of vertices) Let BCOV(T, Q) be an instance of the best covering problem and H'_TQ = (Σ, Γ') its associated hypergraph. The cost of a set of vertices X is defined as: cost(X) = |Miss_EX(Q)|.

ToTravel ≡ (≥ 1 departurePlace) ⊓ (∀departurePlace.Location) ⊓ (≥ 1 arrivalPlace) ⊓ (∀arrivalPlace.Location) ⊓ (≥ 1 arrivalDate) ⊓ (∀arrivalDate.Date) ⊓ (≥ 1 arrivalTime) ⊓ (∀arrivalTime.Time)

FromTravel ≡ (≥ 1 departurePlace) ⊓ (∀departurePlace.Location) ⊓ (≥ 1 arrivalPlace) ⊓ (∀arrivalPlace.Location) ⊓ (≥ 1 departureDate) ⊓ (∀departureDate.Date) ⊓ (≥ 1 departureTime) ⊓ (∀departureTime.Time)

Hotel ≡ Accommodation ⊓ (≥ 1 destinationPlace) ⊓ (∀destinationPlace.Location) ⊓ (≥ 1 checkIn) ⊓ (∀checkIn.Date) ⊓ (≥ 1 checkOut) ⊓ (∀checkOut.Date) ⊓ (≥ 1 nbAdults) ⊓ (∀nbAdults.Integer) ⊓ (≥ 1 nbChildren) ⊓ (∀nbChildren.Integer)

Table 2 Example of a terminology.

Therefore, the BCOV(T, Q) problem can be reduced to computing the transversals with minimal cost of the hypergraph H'_TQ; clearly, only minimal transversals need to be considered. In a nutshell, the BCOV(T, Q) problem reduces to computing the minimal transversals with minimal cost of the hypergraph H'_TQ, and we can therefore reuse and adapt known techniques for computing minimal transversals (e.g., see [10,30,19]) to solve the best covering problem.
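This reduction suggests a direct, if naive, procedure: drop the empty edges (Lemma 2), enumerate candidate vertex sets in order of increasing size, and keep those that hit every remaining edge without containing a smaller transversal already found; the survivors are then ranked by the cost of Definition 10. The brute-force sketch below (ours) is exponential in the number of services and is only a stand-in for the dedicated transversal algorithms cited above:

```python
from itertools import combinations

def minimal_transversals(vertices, edges):
    """Enumerate the minimal transversals of the hypergraph obtained by
    removing the empty edges (i.e., of H'_TQ in the paper's notation)."""
    hit = [e for e in edges.values() if e]        # keep the edges of Γ' only
    found = []
    for k in range(1, len(vertices) + 1):
        for cand in combinations(sorted(vertices), k):
            s = set(cand)
            # size-ordered search: s is minimal iff it meets every edge and
            # no previously found (hence smaller) transversal is contained in it
            if all(s & e for e in hit) and not any(t <= s for t in found):
                found.append(s)
    return found
```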
3.3 Illustrating Example

To illustrate the best covering problem, let us consider a terminology T containing the following concepts (web services):

– ToTravel: allows one to search for trips given a departure place, an arrival place, an arrival date, and an arrival time,
– FromTravel: allows one to search for trips given a departure place, an arrival place, a departure date, and a departure time,
– Hotel: allows one to search for hotels given a destination place, the check-in date, the check-out date, the number of adults, and the number of children.

The terminology T, depicted in Table 2, is described using the description logic FL0 ∪ (≥ n R), i.e., FL0 augmented with the construct (≥ n R). Let us now consider the following query description:

Q ≡ (≥ 1 departurePlace) ⊓ (∀departurePlace.Location) ⊓ (≥ 1 arrivalPlace) ⊓ (∀arrivalPlace.Location) ⊓ (≥ 1 departureDate) ⊓ (∀departureDate.Date) ⊓ Accommodation ⊓ (≥ 1 destinationPlace) ⊓ (∀destinationPlace.Location) ⊓ (≥ 1 checkIn) ⊓ (∀checkIn.Date) ⊓ (≥ 1 checkOut) ⊓ (∀checkOut.Date) ⊓ carRental

We assume that the concept names (e.g., Location, Date, Accommodation) that appear in the description of the query Q and/or in the concept descriptions of T are all atomic concepts. Hence, the query Q and the concepts of T are all provided by their RCFs (otherwise, we would have to recursively unfold the concept, resp. query, description by replacing each concept name appearing in it by its definition). Therefore, the associated hypergraph H_TQ = (Σ, Γ) consists of the set of vertices Σ = {V_ToTravel, V_FromTravel, V_Hotel} and the set of edges:

Γ = {w_(≥1 departurePlace), w_(∀departurePlace.Location), w_(≥1 arrivalPlace), w_(∀arrivalPlace.Location), w_(≥1 departureDate), w_(∀departureDate.Date), w_Accommodation, w_(≥1 destinationPlace), w_(∀destinationPlace.Location), w_(≥1 checkIn), w_(∀checkIn.Date), w_(≥1 checkOut), w_(∀checkOut.Date), w_carRental}

Fig. 1 Example of a hypergraph.

The hypergraph H_TQ = (Σ, Γ) is depicted in Figure 1. We can see that no concept covers the clause corresponding to the edge w_carRental (as we have w_carRental = ∅). Since this is the only empty edge in Γ, the best covers of Q using T will have exactly the following rest: Rest_min ≡ carRental. Now, considering the hypergraph H'_TQ, the only minimal transversal is X = {V_FromTravel, V_Hotel}. So E_X ≡ Hotel ⊓ FromTravel is the best cover of Q using the terminology T. The size of the missing information of E_X is obtained from the transversal X as shown below:

cost(X) = |Miss_(FromTravel ⊓ Hotel)(Q)| = |(≥ 1 departureTime) ⊓ (∀departureTime.Time) ⊓ (≥ 1 nbAdults) ⊓ (∀nbAdults.Integer) ⊓ (≥ 1 nbChildren) ⊓ (∀nbChildren.Integer)| = 6.

In this example, we do not need to compare costs because the hypergraph H'_TQ has only one minimal transversal.
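Replaying this example with the sketches above: clauses are written as plain strings, and string membership stands in for the test a ∈≡ lcs_T(Q, S_i), which is adequate here because every shared clause appears verbatim in both descriptions (an assumption that does not hold in general):

```python
Q = ["(>= 1 departurePlace)", "(forall departurePlace.Location)",
     "(>= 1 arrivalPlace)", "(forall arrivalPlace.Location)",
     "(>= 1 departureDate)", "(forall departureDate.Date)",
     "Accommodation", "(>= 1 destinationPlace)",
     "(forall destinationPlace.Location)", "(>= 1 checkIn)",
     "(forall checkIn.Date)", "(>= 1 checkOut)", "(forall checkOut.Date)",
     "carRental"]

services = {
    "ToTravel":   ["(>= 1 departurePlace)", "(forall departurePlace.Location)",
                   "(>= 1 arrivalPlace)", "(forall arrivalPlace.Location)",
                   "(>= 1 arrivalDate)", "(forall arrivalDate.Date)",
                   "(>= 1 arrivalTime)", "(forall arrivalTime.Time)"],
    "FromTravel": ["(>= 1 departurePlace)", "(forall departurePlace.Location)",
                   "(>= 1 arrivalPlace)", "(forall arrivalPlace.Location)",
                   "(>= 1 departureDate)", "(forall departureDate.Date)",
                   "(>= 1 departureTime)", "(forall departureTime.Time)"],
    "Hotel":      ["Accommodation", "(>= 1 destinationPlace)",
                   "(forall destinationPlace.Location)", "(>= 1 checkIn)",
                   "(forall checkIn.Date)", "(>= 1 checkOut)",
                   "(forall checkOut.Date)", "(>= 1 nbAdults)",
                   "(forall nbAdults.Integer)", "(>= 1 nbChildren)",
                   "(forall nbChildren.Integer)"],
}

edges = build_edges(Q, services, lambda a, s_clauses: a in s_clauses)
print([a for a, e in edges.items() if not e])   # ['carRental'], i.e. Rest_min
best = minimal_transversals(set(services), edges)
print(best)                                     # [{'FromTravel', 'Hotel'}]
cover_clauses = [c for s in sorted(best[0]) for c in services[s]]
print(len([c for c in cover_clauses if c not in Q]))   # 6, i.e. cost(X)
```

As in the example above, the single empty edge yields Rest_min ≡ carRental, the only minimal transversal is {V_FromTravel, V_Hotel}, and the six leftover service clauses give cost(X) = 6.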