Temporal and Spectral Characteristics of Short Bursts from the Soft Gamma Repeaters 1806-20 and 1900+14
Optical path principle of an autocorrelator
The principle of autocorrelation in an optical path refers to the phenomenon in which light emitted from a source, after undergoing reflection or refraction within the same system, overlaps with its original path.
This phenomenon is significant in many fields, from physics to engineering: it can be used to measure the coherence of light, to determine distances between objects, and even to assess the quality of optical components.
In interferometry, the autocorrelation principle is used to analyze the interference pattern produced when a beam of light is split into two paths and recombined. This allows precise measurement of small displacements, vibrations, and deformations of the objects being examined.
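As a loose numerical illustration of the idea (not a model of any particular instrument), the sketch below computes a normalized autocorrelation of a recorded intensity trace; the lag over which it decays is one way to quantify how well a signal overlaps with a delayed copy of itself. The sample rate, pulse shape and noise level are made-up values.

```python
import numpy as np

def normalized_autocorrelation(signal):
    """Return the normalized autocorrelation of a 1-D intensity signal.

    The zero-lag value is 1; the lag at which the function decays
    indicates over what delay the signal still overlaps with itself.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # remove the DC component
    acf = np.correlate(x, x, mode="full")  # raw autocorrelation for all lags
    acf = acf[acf.size // 2:]              # keep non-negative lags only
    return acf / acf[0]                    # normalize so that R(0) = 1

# Illustrative use: a noisy Gaussian pulse sampled at 1 GHz (hypothetical numbers)
t = np.arange(0, 1e-6, 1e-9)
intensity = np.exp(-((t - 5e-7) / 5e-8) ** 2) + 0.05 * np.random.randn(t.size)
r = normalized_autocorrelation(intensity)
print("lag (in samples) at which the autocorrelation first drops below 0.5:",
      int(np.argmax(r < 0.5)))
```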
Analysis of the ground motion characteristics of the Ridgecrest, California earthquakes
Authors: Zhang Qi, Chen Xi, Zheng Xiangyuan. Source: Journal of Hunan University (Natural Sciences), 2021, No. 1.

Abstract: To examine the similarities and differences in the time-frequency characteristics of ground motions recorded from two major events with nearby hypocenters and a short inter-event time within the same seismic sequence, the MW 6.4 and MW 7.1 shocks of the July 2019 Ridgecrest, California earthquake sequence are selected. The attenuation of ground motion parameters with epicentral distance is compared between the two earthquakes and against Yu's model. The differences in the three key elements of ground motion (peak ground acceleration, response spectrum and duration) between the two events are analyzed, and the potential seismic damage to structures is discussed through response spectrum analysis of the two most severe ground motions. The Hilbert-Huang transform (HHT) is adopted to obtain the HHT spectra of the ground motions and to identify the distribution of energy in the time and frequency domains. The results show that the attenuation trend of most of the data agrees well with Yu's model; the response spectra of the two events are broadly similar; the instantaneous frequency corresponding to the largest energy in the HHT spectrum is close to the peak-trough frequency of the cycle containing the peak ground acceleration; and structures may be more severely damaged when subjected to two large earthquakes occurring in succession.
Key words: seismic sequence; ground motion characteristics; attenuation relation; three key elements of ground motion; Hilbert-Huang transform
CLC number: P315.9   Document code: A   Article ID: 1674-2974(2021)01-0108-09

Selecting appropriate ground motions for dynamic response analysis of structures, based on their engineering characteristics, is important for the seismic design and safety assessment of civil engineering structures. Seismically active countries and regions around the world are gradually building strong-motion observation networks covering their entire territories, and these networks provide a rich data source for ground motion research. Ground motion studies have long focused on the time- and frequency-domain engineering characteristics of the three key elements of ground motion (peak acceleration, response spectrum and duration) and on attenuation relations for different site conditions derived from recorded data. In recent years, based on recorded ground motions and damage surveys, Ji Kun et al. [1] analyzed the ground motion parameters of the Yunnan Ludian MS 6.5 earthquake in terms of amplitude characteristics and attenuation relations. Building on this, Dai Jiawei et al. [2] compared the Yunnan Ludian MS 6.5 and Yunnan Jinggu MS 6.6 earthquakes and found that the ground motion parameters of the Ludian event attenuated faster than those of the Jinggu event, possibly due to regional differences in the quality factor Q. Wang Hengzhi et al. [3] used the single-station H/V spectral ratio method to analyze site amplification and showed that station sites clearly amplify ground motions. Xia Kun et al. [4] analyzed records from selected stations of the Wenchuan earthquake and studied the influence of propagation distance and site conditions on far-field ground motions. Many similar studies have appeared at home and abroad [5-6] and are not listed individually.

On 4 and 6 July 2019, two strong earthquakes of magnitude 6.4 and 7.1 struck essentially the same location in California, USA, less than 34 h apart, which makes this a rather special event. The two shocks occurred on the same fault zone, have nearby hypocenters, and belong to the same seismic sequence, so it is worthwhile to review and analyze their ground motion characteristics and effects. Based on ground motion data obtained from the Center for Engineering Strong Motion Data (CESMD), this paper first studies the attenuation relations of the ground motions; then, given the closeness of the two hypocenters, it compares the time- and frequency-domain characteristics of the two events in terms of PGA, response spectrum and duration, and uses the Hilbert-Huang transform to study the distribution of ground motion energy in the time-frequency domain.

1 Data
1.1 Earthquake information
At 10:33 on 4 July 2019, a magnitude 6.4 earthquake occurred near Ridgecrest, Kern County, southern California, with its epicenter 12 km southwest of Searles Valley and a focal depth of 10.7 km. Less than 34 h later, a stronger earthquake of magnitude 7.1 struck the same area with a focal depth of 8.0 km; its epicenter was only 17 km from that of the earlier 6.4 event. Although the magnitudes differ by only 0.7, the pair is generally not regarded as a doublet; the current consensus is that the latter was the mainshock and the former a foreshock [7]. In fact, a magnitude 5.4 foreshock also occurred between them. These were the two most damaging earthquakes in California in nearly 20 years. Both occurred on the Little Lake fault zone, about 45 km long and 15 km wide. Extensive surface cracking and offsets appeared near the epicenters; buildings collapsed, roads were damaged, and secondary disasters such as fires broke out, causing one death, dozens of injuries, and economic losses exceeding five billion US dollars [8-9]. Figures 1 [10] and 2 [11] show the surface rupture and building damage caused by the magnitude 7.1 event. The shaking was clearly felt in Los Angeles and surrounding cities, about 300 km from the epicenter, and in Las Vegas, Nevada.

1.2 Data selection
The strong-motion records used here come from the Center for Engineering Strong Motion Data (CESMD). From 4 July to 11 July, CESMD recorded 105 earthquakes of MW ≥ 3, including the mainshock and the second-largest event (magnitudes 7.1 and 6.4). After removing records with focal depths less than or equal to zero (attributable to ground collapse or human activity), 94 earthquakes remain. Figure 3 shows the distribution of epicenters. Figure 4 shows magnitude versus time; the earthquakes between the 6.4 second-largest event and the 7.1 mainshock are mostly MW 3-4.5, while the aftershocks are mostly MW 4-5.5. Figure 5 is a scatter plot of magnitude versus focal depth; the depths are concentrated between 0 and 13 km, so all events are shallow (focal depth less than 70 km) and therefore relatively damaging to surface structures.

2 Attenuation relations
The 6.4 and 7.1 events are both large, differ by only 0.7 magnitude units, and released comparable energy within the same sequence, so their records are used here to discuss the attenuation of ground motion parameters. Given the magnitude, epicentral distance and other conditions, attenuation curves can be used to estimate ground motion parameters, which then serve as input for structural seismic design and safety assessment [12-15]. Since both earthquakes were recorded by stations in California, the records with the largest peak ground acceleration (PGA) and peak ground velocity (PGV) were selected for each event: the top 49 horizontal records for the 7.1 event and the top 34 for the 6.4 event. These were compared with an existing model, the rock-site horizontal attenuation model established by Yu Yanxiang [16] from the western US NGA strong-motion database; the results are shown in Figure 6. The Yu model curves pass roughly through the middle of the recorded PGA and PGV values for both events, and the records reflect the expected decay of PGA and PGV with increasing epicentral distance. The residual sums of squares (SSR) for PGA and PGV are smaller for the 6.4 event than for the 7.1 event (Table 1), indicating that the model predicts the 6.4 event more accurately. Notably, in the 7.1 event, station CLC, at an epicentral distance of R = 34.5 km, recorded PGA and PGV far above the Yu model predictions, for two reasons: 1) although its epicentral distance is large, its rupture distance Rrup = 2.8 km is far smaller than that of the other stations, so the station is very close to the causative fault, the propagation path is short, and the ground motion parameters are correspondingly large; 2) station CLC sits on class C ground (soft rock), which does not satisfy the rock-site assumption of the Yu model, so considerable site amplification may be present.

3 Three key elements of the ground motions
To study the time- and frequency-domain characteristics of the ground motions in this sequence, 137 and 126 records were selected from the 6.4 and 7.1 events respectively and grouped by epicentral distance into three classes: <100 km, 100-200 km, and >200 km (Table 2), so as to examine the differences caused by epicentral distance in the two main events.

3.1 PGA and duration
Table 3 lists the mean horizontal and vertical PGA at different epicentral distances for the 6.4 and 7.1 events, together with the vertical-to-horizontal PGA ratio. Within the same event, both horizontal and vertical PGA decrease with increasing epicentral distance, consistent with the usual behavior of PGA [17]; the ratio PGAV/PGAH also decreases with distance. For records in the same distance class from the two events (MW 6.4 vs 7.1), the larger magnitude gives the larger PGA, and PGAV/PGAH also increases with magnitude.

Table 4 lists the mean horizontal and vertical durations at different epicentral distances for the two events. The 90% energy duration [18-20] (often called the strong-motion duration) is used here because it reflects the original character of the ground motion more fully. It is defined as the time over which the ground motion energy accumulates from 5% to 95% of the total energy, see Eqs. (1) and (2), where T is the total duration, divided into the three segments 0-T1, T1-T2 and T2-T; T1 and T2 are the times at which 5% and 95% of the total energy are reached, and Td denotes the 90% energy duration. Table 4 shows that the acceleration duration increases with epicentral distance, with the largest increase, for both horizontal and vertical motions, occurring beyond 200 km. The vertical-to-horizontal duration ratio TV/TH is largest for distances of 100-200 km and smallest below 100 km. Notably, unlike the horizontal-vertical relationship for PGA, the vertical 90% energy duration is always longer than the horizontal one. Magnitude also affects duration: in this sequence the larger event has longer durations, but TV/TH decreases slightly.
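To make the 90% energy duration used above concrete, here is a minimal sketch (assuming a uniformly sampled accelerogram stored in a NumPy array; the record itself is synthetic) that accumulates the squared acceleration and reads off the 5% and 95% crossing times T1 and T2, so that Td = T2 - T1.

```python
import numpy as np

def significant_duration(acc, dt, lower=0.05, upper=0.95):
    """Compute the 5%-95% energy (significant) duration of an accelerogram.

    acc : ground acceleration time series
    dt  : sampling interval in seconds
    Returns (Td, T1, T2) where Td = T2 - T1.
    """
    energy = np.cumsum(np.asarray(acc, dtype=float) ** 2) * dt  # Arias-type cumulative energy
    energy /= energy[-1]                                        # normalize to the total energy
    t1 = np.searchsorted(energy, lower) * dt                    # time at 5% of total energy
    t2 = np.searchsorted(energy, upper) * dt                    # time at 95% of total energy
    return t2 - t1, t1, t2

# Hypothetical record: 100 s sampled at 100 samples per second
dt = 0.01
acc = np.random.randn(10000) * np.exp(-np.linspace(0, 5, 10000))
Td, t1, t2 = significant_duration(acc, dt)
print(f"T1 = {t1:.2f} s, T2 = {t2:.2f} s, Td = {Td:.2f} s")
```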
Blackbody: the blackbody concept is the basis for understanding thermal radiation. A blackbody is defined as a perfect absorber and emitter: it absorbs and re-emits all of the energy it receives (nothing is reflected), so its absorptivity and emissivity are both equal to 1. In other words, a body whose absorption coefficient for electromagnetic radiation of every wavelength is identically 1 at any temperature is called a blackbody.
Gray body: a body whose emissivity is less than 1 and approximately constant with wavelength.
Solar radiation: the Sun is a source of electromagnetic radiation and the main energy source for remote sensing. The Sun is a ball of incandescent gas with a central temperature of about 15 × 10⁶ K and a surface temperature of roughly 6000 K. The total radiated power of the Sun is 3.826 × 10²⁶ W, and the radiant exitance at the solar surface is about 6.284 × 10⁷ W·m⁻².
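The two solar figures quoted above can be cross-checked with the Stefan-Boltzmann law. The short sketch below assumes the Sun's effective radiating temperature (about 5772 K) and a solar radius of 6.96 × 10⁸ m, and recovers an exitance of roughly 6.3 × 10⁷ W·m⁻² both from σT⁴ and from dividing the quoted total power by the Sun's surface area.

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 5772.0     # effective radiating temperature of the Sun, K (assumed)
R_SUN = 6.96e8     # solar radius, m (assumed)
L_SUN = 3.826e26   # total radiated power quoted in the text, W

M_from_T = SIGMA * T_EFF ** 4                  # exitance from sigma * T^4
M_from_L = L_SUN / (4 * math.pi * R_SUN ** 2)  # total power / surface area of the Sun

print(f"M from sigma*T^4   : {M_from_T:.3e} W/m^2")   # ~6.3e7 W/m^2
print(f"M from L/(4*pi*R^2): {M_from_L:.3e} W/m^2")   # ~6.3e7 W/m^2
```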
The solar spectrum extends from X-rays all the way to radio waves; it is a composite spectrum.
The solar constant is the total solar radiant energy received per unit time on a unit area oriented perpendicular to the Sun's rays. Its value is about 1.36 × 10³ W·m⁻². This value is in fact the integral, over the whole wavelength range, of the spectral irradiance of sunlight outside the atmosphere.
Here D is the Sun-Earth distance expressed in units of the mean Sun-Earth distance, and θ is the solar zenith angle (the angle between the Sun's rays and the surface normal). When θ is the solar zenith angle at local noon, E is the maximum ground irradiance Em reached at that location.
The solar irradiance received at the ground depends on the solar zenith angle. If atmospheric losses are neglected, the ground irradiance E can be taken as approximately proportional to cos θ:
E = E₀ cos θ / D²,
where E₀ is the solar constant, a physical quantity describing the energy flux density of solar radiation.
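A minimal sketch of the relation just stated, E = E₀ cos θ / D², using the solar-constant value quoted above; the zenith angle in the example is arbitrary.

```python
import math

E0 = 1.36e3   # solar constant quoted above, W/m^2

def ground_irradiance(zenith_deg, D=1.0):
    """E = E0 * cos(theta) / D**2, ignoring atmospheric losses.

    zenith_deg : solar zenith angle theta, in degrees from the surface normal
    D          : Sun-Earth distance in units of the mean Sun-Earth distance
    """
    return E0 * math.cos(math.radians(zenith_deg)) / D ** 2

# Example: local noon with the Sun 30 degrees from the zenith, D = 1
print(f"E = {ground_irradiance(30.0):.0f} W/m^2")   # about 1180 W/m^2
```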
Earth radiation: Earth radiation can be divided into shortwave radiation (0.3-2.5 μm) and longwave radiation (above 6 μm). Figure 1.7 shows that the Earth's shortwave radiation is dominated by reflection of sunlight from the surface, with the Earth's own thermal emission negligible in that range. For the Earth's longwave radiation only the thermal emission of surface objects needs to be considered, since the influence of solar irradiation in this region is very small. In the mid-infrared band between the two (2.5-6 μm), both solar radiation and thermal emission contribute and neither can be neglected. For the reflected (shortwave) radiation, the radiance depends on the solar irradiance and on the reflectance of the surface objects.
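The split into reflection-dominated, mixed and emission-dominated bands can be illustrated with Planck's law. The sketch below compares the radiance of reflected sunlight (a 6000 K blackbody diluted by the Sun-Earth geometry and scaled by an assumed surface reflectance of 0.3) with the thermal emission of a 300 K surface; the chosen wavelengths and the reflectance are illustrative assumptions, not values from the text.

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant

def planck_radiance(wl_m, T):
    """Blackbody spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2 * H * C ** 2 / wl_m ** 5) / np.expm1(H * C / (wl_m * K * T))

wl = np.array([0.5e-6, 2.5e-6, 4.0e-6, 6.0e-6, 10.0e-6])   # wavelengths, m
dilution = (6.96e8 / 1.496e11) ** 2     # (solar radius / 1 AU)^2, geometric dilution factor
albedo = 0.3                            # assumed surface reflectance

reflected = albedo * dilution * planck_radiance(wl, 6000.0)  # reflected sunlight
emitted = planck_radiance(wl, 300.0)                         # thermal emission of a 300 K surface

# Reflection dominates at 0.5 um, the two are comparable near 4 um, emission dominates at 10 um.
for w, r, e in zip(wl, reflected, emitted):
    print(f"{w * 1e6:4.1f} um   reflected {r:9.3e}   emitted {e:9.3e}   W m^-2 sr^-1 m^-1")
```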
Blackbody radiation; the electromagnetic spectrum: the electromagnetic spectrum is ordered by the wavelength or frequency of electromagnetic waves in vacuum. It runs from radio waves through microwaves, infrared, visible light, ultraviolet, X-rays and gamma rays to cosmic rays. The division into spectral regions has no strict physical definition, so the boundaries are neither rigid nor fixed; neighboring regions grade into one another.
Foreign-language source material

Metrics of scale in remote sensing and GIS
Michael F. Goodchild
(National Center for Geographic Information and Analysis, Department of Geography, University of California, Santa Barbara)

ABSTRACT: The term scale has many meanings, some of which survive the transition from analog to digital representations of information better than others. Specifically, the primary metric of scale in traditional cartography, the representative fraction, has no well-defined meaning for digital data. Spatial extent and spatial resolution are both meaningful for digital data, and their ratio, symbolized as L/S, is dimensionless. L/S appears confined in practice to a narrow range. The implications of this observation are explored in the context of Digital Earth, a vision for an integrated geographic information system. It is shown that despite the very large data volumes potentially involved, Digital Earth is nevertheless technically feasible with today's technology.
KEYWORDS: scale, geographic information system, remote sensing, spatial resolution

INTRODUCTION
Scale is a heavily overloaded term in English, with abundant definitions attributable to many different and often independent roots, such that meaning is strongly dependent on context. Its meanings in "the scales of justice" or "scales over one's eyes" have little connection to each other, or to its meaning in a discussion of remote sensing and GIS. But meaning is often ambiguous even in that latter context. For example, scale to a cartographer most likely relates to the representative fraction, or the scaling ratio between the real world and a map representation on a flat, two-dimensional surface such as paper, whereas scale to an environmental scientist likely relates either to spatial resolution (the representation's level of spatial detail) or to spatial extent (the representation's spatial coverage). As a result, a simple phrase like "large scale" can send quite the wrong message when communities and disciplines interact: to a cartographer it implies fine detail, whereas to an environmental scientist it implies coarse detail. A computer scientist might say that in this respect the two disciplines were not interoperable.
In this paper I examine the current meanings of scale, with particular reference to the digital world, and the metrics associated with each meaning. The concern throughout is with spatial meanings, although temporal and spectral meanings are also important. I suggest that certain metrics survive the transition to digital technology better than others. The main purpose of this paper is to propose a dimensionless ratio of two such metrics that appears to have interesting and useful properties. I show how this ratio is relevant to a specific vision for the future of geographic information technologies termed Digital Earth. Finally, I discuss how scale might be defined in ways that are accessible to a much wider range of users than cartographers and environmental scientists.

FOUR MEANINGS OF SCALE

LEVEL OF SPATIAL DETAIL: REPRESENTATIVE FRACTION
A paper map is an analog representation of geographic variation, rather than a digital representation. All features on the Earth's surface are scaled using an approximately uniform ratio known as the representative fraction (it is impossible to use a perfectly uniform ratio because of the curvature of the Earth's surface). The power of the representative fraction stems from the many different properties that are related to it in mapping practice.
First, paper maps impose an effective limit on the positional accuracy of features, because of instability in the material used to make maps, limited ability to control the location of the pen as the map is drawn, and many other practical considerations. Because positional accuracy on the map is limited, effective positional accuracy on the ground is determined by the representative fraction. A typical (and comparatively generous) map accuracy standard is 0.5 mm, and thus positional accuracy is 0.5 mm divided by the representative fraction (e.g., 12.5 m for a map at 1:25,000). Second, practical limits on the widths of lines and the sizes of symbols create a similar link between spatial resolution and representative fraction: it is difficult to show features much less than 0.5 mm across with adequate clarity. Finally, representative fraction serves as a surrogate for the features depicted on maps, in part because of this limit to spatial resolution, and in part because of the formal specifications adopted by mapping agencies, that are in turn related to spatial resolution. In summary, representative fraction characterizes many important properties of paper maps.
In the digital world these multiple associations are not necessarily linked. Features can be represented as points or lines, so the physical limitations to the minimum sizes of symbols that are characteristic of paper maps no longer apply. For example, a database may contain some features associated with 1:25,000 map specifications, but not all, and may include representations of features smaller than 12.5 m on the ground. Positional accuracy is also no longer necessarily tied to representative fraction, since points can be located to any precision, up to the limits imposed by internal representations of numbers (e.g., single precision is limited to roughly 7 significant digits, double precision to 15). Thus the three properties that were conveniently summarized by representative fraction (positional accuracy, spatial resolution, and feature content) are now potentially independent.
Unfortunately this has led to a complex system of conventions in an effort to preserve representative fraction as a universal defining characteristic of digital databases. When such databases are created directly from paper maps, by digitizing or scanning, it is possible for all three properties to remain correlated. But in other cases the representative fraction cited for a digital database is the one implied by its positional accuracy (e.g., a database has representative fraction 1:12,000 because its positional accuracy is 6 m); and in other cases it is the feature content or spatial resolution that defines the conventional representative fraction (e.g., a database has representative fraction 1:12,000 because features at least 6 m across are included). Moreover, these conventions are typically not understood by novice users, the general public, or children, who may consequently be very confused by the use of a fraction to characterize spatial data, despite its familiarity to specialists.

SPATIAL EXTENT
The term scale is often used to refer to the extent or scope of a study or project, and spatial extent is an obvious metric. It can be defined in area measure, but for the purposes of this discussion a length measure is preferred, and the symbol L will be used. For a square project area it can be set to the width of the area, but for rectangular or oddly shaped project areas the square root of area provides a convenient metric. Spatial extent defines the total amount of information relevant to a project, which rises with the square of a length measure.

PROCESS SCALE
The term process refers here to a computational model or representation of a landscape-modifying process, such as erosion or runoff. From a computational perspective, a process is a transformation that takes a landscape from its existing state to some new state, and in this sense processes are a subset of the entire range of transformations that can be applied to spatial data. Define a process as a mapping b(x, t_2) = f(a(x, t_1)), where a is a vector of input fields, b is a vector of output fields, f is a function, t is time, t_2 is later in time than t_1, and x denotes location. Processes vary according to how they modify the spatial characteristics of their inputs, and these are best expressed in terms of contributions to the spatial spectrum. For example, some processes determine b(x, t_2) based only on the inputs at the same location, a(x, t_1), and thus have minimal effect on spatial spectra. Other processes produce outputs that are smoother than their inputs, through processes of averaging or convolution, and thus act as low-pass filters. Less commonly, processes produce outputs that are more rugged than their inputs, by sharpening rather than smoothing gradients, and thus act as high-pass filters.
The scale of a process can be defined by examining the effects of spectral components on outputs. If some wavelength s exists such that components with wavelengths shorter than s have negligible influence on outputs, then the process is said to have a scale of s. It follows that if s is less than the spatial resolution S of the input data, the process will not be accurately modeled.
While these conclusions have been expressed in terms of spectra, it is also possible to interpret them in terms of variograms and correlograms. A low-pass filter reduces variance over short distances, relative to variance over long distances. Thus the short-distance part of the variogram is lowered, and the short-distance part of the correlogram is increased. Similarly a high-pass filter increases variance over short distances relative to variance over long distances.

L/S RATIO
While scaling ratios make sense for analog representations, the representative fraction is clearly problematic for digital representations. But spatial resolution and spatial extent both appear to be meaningful in both analog and digital contexts, despite the problems with spatial resolution for vector data. Both S and L have dimensions of length, so their ratio is dimensionless. Dimensionless ratios often play a fundamental role in science (e.g., the Reynolds number in hydrodynamics), so it is possible that L/S might play a fundamental role in geographic information science. In this section I examine some instances of the L/S ratio, and possible interpretations that provide support for this speculation.
- Today's computing industry seems to have settled on a screen standard of order 1 megapel, or 1 million picture elements. The first PCs had much coarser resolutions (e.g., the CGA standard of the early 1980s), but improvements in display technology led to a series of more and more detailed standards. Today, however, there is little evidence of pressure to improve resolution further, and the industry seems to be content with an L/S ratio of order 10^3. Similar ratios characterize the current digital camera industry, although professional systems can be found with ratios as high as 4,000.
- Remote sensing instruments use a range of spatial resolutions, from the 1 m of IKONOS to the 1 km of AVHRR. Because a complete coverage of the Earth's surface at 1 m requires on the order of 10^15 pixels, data are commonly handled in more manageable tiles, or approximately rectangular arrays of cells. For years, Landsat TM imagery has been tiled in arrays of approximately 3,000 cells x 3,000 cells, for an L/S ratio of 3,000.
- The value of S for a paper map is determined by the technology of map-making and the techniques of symbolization, and a value of 0.5 mm is not atypical. A map sheet 1 m across thus achieves an L/S ratio of 2,000.
- Finally, the human eye's S can be defined as the size of a retinal cell, and the typical eye has order 10^8 retinal cells, implying an L/S ratio of 10,000. Interestingly, then, the screen resolution that users find generally satisfactory corresponds approximately to the parameters of the human visual system; it is somewhat larger, but the computer screen typically fills only a part of the visual field.
These examples suggest that L/S ratios of between 10^3 and 10^4 are found across a wide range of technologies and settings, including the human eye. Two alternative explanations immediately suggest themselves: the narrow range may be the result of technological and economic constraints, and thus may expand as technology advances and becomes cheaper; or it may be due to cognitive constraints, and thus is likely to persist despite technological change.
This tension between technological, economic, and cognitive constraints is well illustrated by the case of paper maps, which evolved under what from today's perspective were severe technological and economic constraints. For example, there are limits to the stability of paper and to the kinds of markings that can be made by hand-held pens. The costs of printing drop dramatically with the number of copies printed, because of strong economies of scale in the printing process, so maps must satisfy many users to be economically feasible. Goodchild [2000] has elaborated on these arguments. At the same time, maps serve cognitive purposes, and must be designed to convey information as effectively as possible. Any aspect of map design and production can thus be given two alternative interpretations: one, that it results from technological and economic constraints, and the other, that it results from the satisfaction of cognitive objectives. If the former is true, then changes in technology may lead to changes in design and production; but if the latter is true, changes in technology may have no impact.
The persistent narrow range of L/S from paper maps to digital databases to the human eye suggests an interesting speculation: that cognitive, not technological or economic objectives, confine L/S to this range. From this perspective, L/S ratios of more than 10^4 have no additional cognitive value, while L/S ratios of less than 10^3 are perceived as too coarse for most purposes. If this speculation is true, it leads to some useful and general conclusions about the design of geographic information handling systems. In the next section I illustrate this by examining the concept of Digital Earth.
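The L/S ratios listed above are easy to recompute; the following sketch simply tabulates them and their base-10 logarithms (the retinal-cell count and symbol size are the order-of-magnitude figures quoted in the text).

```python
import math

# Recompute the L/S ratios quoted in the text and their logarithms.
examples = {
    "1-megapel display (1000 x 1000 px)": 1_000_000 ** 0.5,   # L/S = sqrt(number of pixels)
    "Landsat TM tile (3000 x 3000 cells)": 3_000,
    "paper map (1 m sheet / 0.5 mm symbol)": 1.0 / 0.5e-3,
    "human eye (~1e8 retinal cells)": 1e8 ** 0.5,
}

for name, ratio in examples.items():
    print(f"{name:40s} L/S = {ratio:8.0f}   log10(L/S) = {math.log10(ratio):.1f}")
```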
For simplicity, the discussion centers on the log to base 10 of the L/S ratio, denoted by log L/S, and the speculation that its effective range is between 3 and 4. This speculation also suggests a simple explanation for the fact that scale is used to refer both to L and to S in environmental science, without hopelessly confusing the listener. At first sight it seems counterintuitive that the same term should be used for two independent properties. But if the value of log L/S is effectively fixed, then spatial resolution and extent are strongly correlated: a coarse spatial resolution implies a large extent, and a detailed spatial resolution implies a small extent. If so, then the same term is able to satisfy both needs.

THE VISION OF DIGITAL EARTH
The term Digital Earth was coined in 1992 by U.S. Vice President Al Gore [Gore, 1992], but it was in a speech written for delivery in 1998 that Gore fully elaborated the concept (www.d~~Pl9980131 .html): "Imagine, for example, a young child going to a Digital Earth exhibit at a local museum. After donning a head-mounted display, she sees Earth as it appears from space. Using a data glove, she zooms in, using higher and higher levels of resolution, to see continents, then regions, countries, cities, and finally individual houses, trees, and other natural and man-made objects. Having found an area of the planet she is interested in exploring, she takes the equivalent of a 'magic carpet ride' through a 3-D visualization of the terrain."
This vision of Digital Earth (DE) is a sophisticated graphics system, linked to a comprehensive database containing representations of many classes of phenomena. It implies specialized hardware in the form of an immersive environment (a head-mounted display), with software capable of rendering the Earth's surface at high speed, and from any perspective. Its spatial resolution ranges down to 1 m or finer. On the face of it, then, the vision suggests data requirements and bandwidths that are well beyond today's capabilities. If each pixel of a 1 m resolution representation of the Earth's surface was allocated an average of 1 byte then a total of 1 Pb of storage would be required; storage of multiple themes could push this total much higher. In order to zoom smoothly down to 1 m it would be necessary to store the data in a consistent data structure that could be accessed at many levels of resolution. Many data types are not obviously renderable (e.g., health, demographic, and economic data), suggesting a need for extensive research on visual representation.
The bandwidth requirements of the vision are perhaps the most daunting problem. To send 1 Pb of data at 1 Mb per second would take roughly a human lifetime, and over 12,000 years at 56 Kbps. Such requirements dwarf those of speech and even full-motion video. But these calculations assume that the DE user would want to see the entire Earth at 1 m resolution. The previous analysis of log L/S suggested that for cognitive (and possibly technological and economic) reasons user requirements rarely stray outside the range of 3 to 4, whereas a full Earth at 1 m resolution implies a log L/S of approximately 7. A log L/S of 3 suggests that a user interested in the entire Earth would be satisfied with 10 km resolution; a user interested in California might expect 1 km resolution; and a user interested in Santa Barbara County might expect 100 m resolution.
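A small sketch of the same reasoning: if log L/S is pinned near 3, the implied spatial resolution follows directly from the extent of interest (the extents used below are rough, assumed figures for the three regions mentioned).

```python
# If log10(L/S) is effectively fixed near 3, resolution follows from extent.
def resolution_for_extent(extent_km, log_ls=3.0):
    """Spatial resolution S (km) implied by an extent L (km) and a fixed log10(L/S)."""
    return extent_km / 10 ** log_ls

# Rough, assumed extents for the three regions discussed in the text
for region, extent_km in [("whole Earth (~10,000 km)", 10_000),
                          ("California (~1,000 km)", 1_000),
                          ("Santa Barbara County (~100 km)", 100)]:
    print(f"{region:32s} S = {resolution_for_extent(extent_km):6.2f} km")
```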
Moreover, these resolutions need apply only to the center of the current field of view. On this basis the bandwidth requirements of DE become much more manageable. Assuming an average of 1 byte per pixel, a megapel image requires order 10^7 bps if refreshed once per second. Every one-unit reduction in log L/S results in two orders of magnitude reduction in bandwidth requirements. Thus a T1 connection seems sufficient to support DE, based on reasonable expectations about compression, and reasonable refresh rates. On this basis DE appears to be feasible with today's communication technology.

CONCLUDING COMMENTS
I have argued that scale has many meanings, only some of which are well defined for digital data, and therefore useful in the digital world in which we increasingly find ourselves. The practice of establishing conventions which allow the measures of an earlier technology, the paper map, to survive in the digital world is appropriate for specialists, but is likely to make it impossible for non-specialists to identify their needs. Instead, I suggest that two measures, identified here as the large measure L and the small measure S, be used to characterize the scale properties of geographic data.
The vector-based representations do not suggest simple bases for defining S, because their spatial resolutions are either variable or arbitrary. On the other hand spatial variation in S makes good sense in many situations. In social applications, it appears that the processes that characterize human behavior are capable of operating at different scales, depending on whether people act in the intensive pedestrian-oriented spaces of the inner city or the extensive car-oriented spaces of the suburbs. In environmental applications, variation in apparent spatial resolution may be a logical sampling response to a phenomenon that is known to have more rapid variation in some areas than others; from a geostatistical perspective this might suggest a non-stationary variogram or correlogram (for examples of non-stationary geostatistical analysis see Atkinson [2001]). This may be one factor in the spatial distribution of weather observation networks (though others, such as uneven accessibility and uneven need for information, are also clearly important).
The primary purpose of this paper has been to offer a speculation on the significance of the dimensionless ratio L/S. The ratio is the major determinant of data volume, and consequently processing speed, in digital systems. It also has cognitive significance because it can be defined for the human visual system. I suggest that there are few reasons in practice why log L/S should fall outside the range 3-4, and that this provides an important basis for designing systems for handling geographic data. Digital Earth was introduced as one such system. A constrained ratio also implies that L and S are strongly correlated in practice, as suggested by the common use of the same term scale to refer to both.

ACKNOWLEDGMENT
The Alexandria Digital Library and its Alexandria Digital Earth Prototype, the source of much of the inspiration for this paper, are supported by the U.S. National Science Foundation.

REFERENCES
Atkinson, P.M., 2001. Geographical information science: Geocomputation and nonstationarity. Progress in Physical Geography 25(1): 111-122.
Goodchild, M.F., 2000. Communicating geographic information in a digital age. Annals of the Association of American Geographers 90(2): 344-355.
Goodchild, M.F. & J. Proctor, 1997. Scale in a digital geographic world. Geographical and Environmental Modelling 1(1): 5-23.
Gore, A., 1992. Earth in the Balance: Ecology and the Human Spirit. Houghton Mifflin, Boston, 407 pp.
Lam, N.S. & D. Quattrochi, 1992. On the issues of scale, resolution, and fractal analysis in the mapping sciences. Professional Geographer 44(1): 88-98.
Quattrochi, D.A. & M.F. Goodchild (Eds), 1997. Scale in Remote Sensing and GIS. Lewis Publishers, Boca Raton, 406 pp.
The auditory masking effect and its applications
Abstract: Auditory masking refers to the influence that one sound exerts on the auditory system's perception of another sound; it is ubiquitous in nature. Auditory masking not only plays an important role in how humans and animals perceive and localize sound, it is also increasingly applied in everyday life and in clinical treatment.
Key words: masking effect; applications
CLC number: TN912.3   Document code: A
Humans and animals live in environments full of sound. Humans rely on sound for communication, and many animals use sound to communicate, to find food and mates, and even to sense their surroundings. Some biologically meaningful sounds do not occur in isolation in nature; their mutual interaction gives rise to auditory masking. In auditory research, masking refers to the influence of one sound on the auditory system's perception of another. Early psychophysical experiments showed that when two sound signals presented from different locations are separated by a sufficiently short interval, listeners perceive them as a single fused sound and can identify only the location of the leading sound; that is, the first sound (the masker) exerts forward masking on the lagging sound (the probe), while the lagging sound also exerts a certain degree of backward masking on the perception of the leading sound. In general, masking weakens the auditory system's ability to discriminate and perceive sounds: responses to the probe decrease, detection thresholds rise, and detection ability declines. In some cases, however, the masker can facilitate neuronal responses to the probe and increase their excitability. Auditory masking is very common in nature and plays a very important role in how humans and animals perceive and localize sound. As understanding of the phenomenon has grown, it has been applied more and more in daily life and in the clinical treatment of related disorders.
1 The role of masking in sound perception and localization
1.1 Sound source localization
When a sound is produced in a reverberant environment, it propagates in different directions and is then reflected from nearby surfaces. The first (direct) sound and the reflected sounds interact, producing masking effects. The auditory system therefore faces the problem of analyzing the interaction between the direct sound and its reflections, and of perceiving and localizing sounds according to the differing properties of those reflections. Even though this is a seemingly chaotic jumble of information, we can still localize the sources and interpret their meaning quite accurately. The ability to localize sound sources is very important: determining the direction of an object helps us direct attention toward, or avoid, a particular source. For some animals, especially echolocating animals such as bats, sound localization helps in finding prey and avoiding predators; it is an indispensable survival ability.
---------------------------------------------------------------最新资料推荐------------------------------------------------------心脏超声中英文对照词汇一次谐波共振 First harmonic response [hɑː’mɒnɪk] 二尖瓣Mitral valve, MV [‘maɪtrəl] 二尖瓣口 Mitral valve orifice, MVO [‘ɑrɪfɪs] 二尖瓣后瓣 Posterior mitral valve, PMV [pɒ’st ɪərɪə] 二尖瓣血流 Mistral inflow 二尖瓣前瓣 Anterior mitral valve, AMV [n’tɪərɪə] 二尖瓣裂 Mitral valve cleft, MVCLF [kleft] 二次谐波共振 Second harmonic response 二次谐波多普勒组织成像 Second Doppler tissue imaging, H-DTI 二次谐波成像Second harmonic imaging 人工瓣膜血栓 Prosthetic heart valve thrombus 三心房心 Cor triatriatum 三尖瓣 Tricuspid valve, TV 三尖瓣关闭不全Tricuspid valve insufficiency 三尖瓣闭锁Tricuspid atresia 三尖瓣前瓣 Anterior tricuspid valve, ATV 三尖瓣狭窄Tricuspid valve stenosis 三尖瓣疾病Tricuspid disease 三尖瓣隔瓣 Septal tricuspid valve, STV 三维超声心动图Three-dimensional echocardiography 大动脉转位Transposition of the great arteries 川崎病 Kawasakis disease 中场 Middle filed 分辨率 resolution 反射 Reflection 心内血栓 Intracardiac thrombus 心内膜垫缺损 endocardial cushion defect 心包疾病 Pericardial effusion 心包积液 Pericardial effusion 心外膜冠状动脉 Epicardial coronary artery 心尖多孔瑞士奶酪样室间隔缺损 Digital multiple swiss cheese septal1 / 5defect 心肌内冠状动脉 Intramyocardial coronary artery 心肌对比超声心动图 Myocardial contrast echocardiography, MCE 心肌梗塞 Myocardial infarction 心肌梗塞并发症 Complications of myocardial infarction 心房内血栓 Atria thrombus 心房黏液瘤Atria myxoma, MYX 心室内附壁血栓Intraventricular mural thrombus 心绞痛Angina pectoris 心脏声学造影Cardial acoustic contrast 心脏肿瘤Cardial tumor 心脏移植Heart transplantation 主动脉二叶瓣 Bicuspid aortic valve 主动脉瓣口Aortic valve orifice, AVO 主动脉瓣狭窄Aortic valves stenosis 主肺动脉 Main pulmonary artery, MPA 主瓣 Main lobe 功率谱 Power spectrum 右心房 Right atrium, RA 右心室 Right ventricle, RV 右心室收缩时间间期 Right ventricle systolic time intervals 右心室收缩前间期 Right ventricle pre-ejection period, RVPEP 右心室射血时间Right ventricle ejection time ,RVET 右冠状动脉起源于肺动脉 Anomalous origin of right coronary artery from pulmonary artery 右室双出口Double-outlet right ventricle 右心室双腔心 Double chambered right ventricle 右心室流出道 right ventricle outflow 对比造影谐波成像 Contrast agent harmonic imaging, CAHI 对比超声心动图学 Contrast echocardiography, CE 对数压缩 Logarithmic compensation 尼奎斯特频率极限 Nyquist frequency limit 左心耳 Left atrium apendge, LAA 左心房 Left atrium 左心左心室长---------------------------------------------------------------最新资料推荐------------------------------------------------------ 轴切面 Left ventricle, LV 左心室发育不全综合征 Hypoplastic left heart syndrome 左心室收缩末期内径 Left ventricle end systolic dimension, LVEDD 左心室流出道梗阻 Left ventricle outflow obstruction 左心室舒张末期内径 Left ventricle end diastolic dimension, LVSDD 左冠状动脉起源于肺动脉 Anomalous origin of left coronary artery from pulmonary artery 平行扫描 Parallel scanning 永存动脉干 Persistent arterious 电子相控阵扇型扫面 Phased array sector scan 皮肤黏膜淋巴结综合征Mucocutaneous lymph node syndrome, MCLS 节制束 Moderator band 伪像Artifacts 伪影处理技术Pseudo-color processing technique 先天性肺动脉口狭窄Congenital pulmonary artery fistula 先天性冠状动脉瘘 Congenital coronary artery fistula 共振Resonant 共振频率Resonant frenquency 压力半降时间Pressure half-time, PHT 回声失落 Echo drop-out 回声增强效应Effect of echo enhancement 团注 Bolus 多平面经食道超声心动图Multiplane transesophageal echocardiography 多点选通式多普勒Multigate Doppler 多普勒方程 Doppler equation 多普勒组织 M 型模式 Doppler tissue m-mode, DT-M-MODE 多普勒组织加速度图Doppler tissue acceleration, DAT 多普勒组织成像Doppler tissue imaging, DTI 多普勒组织脉冲频谱 Doppler tissue pulsed wave mode, DT-DTE 多普勒组织能量图 Doppler tissue energy, DTE3 / 5多普勒组织速度图 Doppler tissue velocity, DTV 多普勒效应Doppler effect 多普勒超声心动图 Doppler echocardiography 多普勒频移Doppler shift 导航装置Homing deveces 导管超声Catheter ultrasound 机械扇型扫描 Mechanical sector scan 
纤维瘤Fibroma 自由扫查Free-hand scanning 自动边缘检测Automatic border detection 自然组织谐波成像 Native tissue harmonic imaging 色彩倒错 Color aliasing 血栓 Thrombus 血流彩色成像 Color flow mapping 血管肉瘤 Angiosarocama 血管腔内超声成像Intravascular ultrasound imaging 负荷超声心动图Stress echocardiography 体元模型 Voxel model 声束形成 Bean forming 声阻抗Acoustic impedance 声学定量Acoustic quantification 声学速度Acoustic velocity 声强Acoustic intensity 层流 Laminar flow 希阿利网 Chiari netok 快速富里叶变换Fast fourier transform 折射Refraction 时间分辨率Temporal resolution 时间增益补偿 Time gain compensation 时域法 Time domain method 纵向分辨率 Longitudinal resolution 纵波 Longitudinal wave 肛管超声 Anal endosonography 近场 Near filed 进入曲线 Wash in curves 远场 Far filed 连续式多普勒Continuous wave Doppler, CW Doppler 连续注射Continuous injection 乳头肌Papillary muscle, PM 单脉冲删除Single pulse concellation 取样容积 Sample volume 图像分辨率 Image resolution 实时频谱分析 Real-time spectral analysis 房间隔缺---------------------------------------------------------------最新资料推荐------------------------------------------------------ 损 Atrial septal defect, ASD 房间隔脂肪瘤样肥厚 Lipomatous hypertrophy of the atrial septum 房间隔瘤Atrial septal aneurysm 欧氏瓣Eustachian 空壳Hollow core 空间分辨率Spatial resolution 组织多普勒成像技术 Tissue Doppler imagine 组织多普勒超声心动图 Tissue Doppler echocardiography 经心腔内超声心动图 Intracardiac echocardiography 经阴道彩色多普勒超声 Trans-vaginal color Doppler ultrasound 经食道超声心动图Transesophageal echocardiography 限制性室间隔缺损Restrictive ventricular septal defect 非致密性心室心肌二维超声心动图Non-compaction of ventricular myocardium two dimensional echocardiography 肺动脉 Pulmonary artery 肺动脉狭窄肺动脉高压肺动脉瓣肺静脉异位引流顶端侧方扫描式顶端旋转扫描式临床基础冠心病冠状动脉内超声显像冠状动脉异常冠状动脉血流储备冠状动脉起源异常冠状静脉窦厚度分辨率室上嵴室间隔缺损室间隔膜部瘤界面相干对比造影成像相干图像形成技术类脂背向散射积分背景噪音脉冲反相谐波成像脉冲式多普勒脉冲重复频率衍射重叠房室瓣扇形扫描振幅捆扎型纤维蛋白分子旁瓣效应浦肯野纤维瘤消除曲线涡流特定定点造影剂留间隔器缺血性预适应胸骨旁短轴切面能量对比成像脂肪瘤5 / 5。
Philips 1.5 T MR scanner: annotated translation of parameter options
一、Initial tab: all parameters on this tab are drawn from the tabs described below, so they are not discussed separately.
二、Geometry tab
1. Coil selection
1.1 Element selection: selects the coil elements; for example, the head-neck coil has separate head and neck elements. Like the coil selection itself, this is set by the sequence defaults.
1.2 Connection: the connection channel; con-A and con-B are available.
2. Homogeneity Correction: an image filtering technique intended to remove the near-coil artifact of surface coils, making the signal from tissue at different distances from the coil as uniform as possible. Its accuracy and effectiveness are limited, so it is not commonly used.
3. CLEAR (Constant LEvel AppeaRance): the most commonly used post-processing technique for removing the near-coil artifact of surface coils.
3.1 Body tuned: in addition to the spatial sensitivity information of the surface coil, a body-coil contrast correction is applied to remove the near-coil artifact.
Note: whenever a surface coil is used and the imaging slices are perpendicular to the coil, one of the techniques above should be selected to remove the near-coil artifact.
4. Fold-over suppression: suppresses fold-over (wrap-around) artifacts.
5. SENSE (SENSitivity Encoding): in essence a mathematical algorithm that removes fold-over artifacts using the spatial sensitivity information of the phased-array coil obtained from a reference scan.
6. Stacks: the scan stacks.
6.1 Type: parallel or radial.
6.2 Slice orientation: the orientation of the slices.
6.3 Fold-over direction: the phase-encoding direction.
6.4 Fat shift direction: the chemical shift direction. Because the chemical shift artifact lies along the frequency-encoding direction, the options offered by the system are always perpendicular to the phase-encoding direction (a worked example of the size of this shift follows this section).
7. Minimum number of packages: the minimum number of acquisition packages. Its main significance is in brain T2WI-FLAIR sequences: to prevent cerebrospinal fluid from outside the imaging slices flowing into them and degrading the fluid suppression, the number of excited slices is made larger than the number of imaged slices.
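As referenced in item 6.4, here is a back-of-envelope calculation (not a Philips formula) of how large the fat shift is at 1.5 T: the gyromagnetic ratio and the 3.5 ppm fat-water separation are standard values, while the receiver bandwidth per pixel is a hypothetical number chosen for illustration.

```python
# Back-of-envelope size of the fat/water chemical shift at 1.5 T (illustrative numbers).
GAMMA_MHZ_PER_T = 42.577   # proton gyromagnetic ratio / 2*pi, MHz per tesla
B0_T = 1.5                 # field strength of the scanner discussed above
FAT_WATER_PPM = 3.5        # approximate fat-water chemical shift

larmor_mhz = GAMMA_MHZ_PER_T * B0_T       # ~63.9 MHz at 1.5 T
shift_hz = FAT_WATER_PPM * larmor_mhz     # ~224 Hz fat-water frequency offset

bandwidth_per_pixel_hz = 220.0            # hypothetical receiver bandwidth per pixel
shift_pixels = shift_hz / bandwidth_per_pixel_hz

print(f"Larmor frequency: {larmor_mhz:.1f} MHz")
print(f"Fat-water shift : {shift_hz:.0f} Hz -> {shift_pixels:.1f} pixels along the frequency axis")
```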
FAST Limiter Manual

Contents: Introduction · Feature Overview · GUI Overview · FAST and Detailed View · Learning and Automatic Parametrisation · Fine Tune Your Sound · Style Modules · Output Monitoring · Global Control Section · Settings

Introduction
FAST Limiter is an Artificial Intelligence (AI) powered true peak limiter plug-in that helps to add the right finishing touches to your audio tracks. Like all plug-ins of the FAST family, FAST Limiter has been designed with a simple goal in mind: get great results, FAST!

Feature Overview
AI Powered Limiting: FAST Limiter uses AI technology to find the right limiter parameters for your audio material within seconds in order to get your tracks ready for publishing.
FAST View and Detailed View: the user interface has two view modes. FAST View provides the customised controls you need to keep in the creative flow, while Detailed View provides deeper control over parameters.
Flavour Buttons: three buttons allow you to choose between a Modern, Neutral or Aggressive limiting style.
Style Modules: four different Style Modules offer an easy way to tweak the spectral and temporal characteristics of your audio material at the touch of a button.
Profiles: different profiles allow you to tell FAST Limiter what kind of genre the plug-in is dealing with. This ensures a good adaption of the processing to your audio material.

GUI Overview
The interface is organised around the Limiter Display and the Learn Section.
Learn Button: start the learning process.
Waveform Display: monitor the limiter's impact on the signal.
Gain: control the input gain.
Gain Reduction: monitor the applied gain reduction.
Meter: monitor your input and output level and the applied gain reduction.
Style Modules: tweak the spectral and temporal characteristics of your audio material.
FAST/Detailed: switch the GUI between FAST View and Detailed View.
Flavour Buttons.
Profile Dropdown: select a genre that best matches your audio material or choose a reference track.
Quality Indicators: check the publishing quality of your track. Hover on the indicator icons to show some hints regarding the final publishing checks.

FAST and Detailed View
The user interface of FAST Limiter has two view modes: FAST View provides the customised controls you need, based on the content of your material; Detailed View provides deeper control over parameters, to adjust settings to your own taste. You can easily switch between FAST and Detailed View by clicking on the two buttons in the upper left corner of the interactive display.
FAST View is designed to give you optimised controls to keep in the creative zone. In this mode, you only see the controls you need, so you can make quick tweaks and keep moving throughout your music-making process. This is the default view when opening the plug-in.
Detailed View is designed for users who want to have maximum freedom in making changes as they see fit. All parameters can be freely modified and allow you to fine tune the results to your personal taste.

Learning and Automatic Parametrisation
The heart of FAST Limiter is its ability to automatically find the most suitable limiter parameters for your signal. Therefore, choosing a profile and starting the learning process will typically be the first thing you want to do when working with the plug-in. The learning process will not only automatically set all limiter parameters, it will also set and activate the Style Modules that allow you to tweak the spectral and temporal characteristics of your audio material.
1. Choose a Genre Profile that best matches your input signal. If you don't find a suitable profile, simply select "Universal". You can also set a music file from your hard drive as a target by clicking 'Reference Track' in the dropdown menu.
2. Start the playback in your DAW. Make sure to select a relatively loud segment of your track (e.g. the refrain).
3. Press the Learn button to start the learning process. A progress bar inside the Learn button and a learning animation inside the Style Modules indicate the progress of the learning process.
Done! Once learning is completed, FAST Limiter sets well-balanced limiter parameters and activates the Style Modules. You can now see the gained input signal (light grey), the output signal (green) and the gain reduction curve (red) in the interactive Waveform Display. If you want FAST Limiter to learn from a different section of your input signal, you can simply start the audio playback from there and click the Relearn button. Please note that you don't have to click the Relearn button when switching between Flavours or Genre Profiles.

Fine Tune Your Sound

Working in FAST View
1. Adapt the Gain: use the Gain handle to set the amount of input gain. Moving the handle up will raise the level of the input signal and more peaks will be limited. This leads to an increased loudness and reduces the dynamics of the output signal. You will instantly see the impact on the waveform of the limited output signal in the Waveform Display.
2. Choose a Flavour: use the three Flavour buttons to quickly change the character of your limited sound. Once you have settled on a Flavour you can further fine-tune the results using the Gain handle and the basic Style Modules in FAST View, or the additional parameters available in Detailed View. Read the Style Modules section for more details.
3. Change Profiles: you can always change the selected Genre Profile without needing to restart the learning process. Please note that manually made adaptions to parameters will not be adopted: changing your profile will reset all parameters to their default values.
4. Level Match: enable to level-match the processed output with the dry input signal for an accurate A/B comparison. This helps to objectively compare the sound of the original signal and the processed signal without being (positively) biased by the louder level of the gained output signal.

Working in Detailed View
1. Limit: set the maximum signal level that is allowed to pass through the limiter. This is a hard limit for the gained signal (all values larger than this value are limited) and represents the highest possible true peak level of the output signal.
2. Speed: set the speed for the limiter. The speed parameter controls the temporal characteristics (attack and release) of the limiter.
   - fast: the gain reduction quickly returns to zero after the signal was limited. This setting preserves more transients and leads to a louder signal, but may cause audible distortion for heavily limited signals.
   - slow: the gain reduction returns slowly to zero after the signal was limited. This setting leads to smooth limiting results, but may not be perfectly suited for highly transient signals.
   - auto: in auto mode, FAST Limiter will automatically adapt to the characteristics of the input signal. Auto mode ensures that the limiting process does not create audible distortion even for more extreme gain settings. This mode can be used for any type of signal.
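The Speed setting above is easiest to picture with a toy gain-computer sketch. This is not the plug-in's algorithm, just a generic hard-ceiling limiter with attack/release smoothing (and no lookahead), written to show how shorter time constants track peaks faster while longer ones give a smoother gain-reduction curve; all constants are arbitrary.

```python
import numpy as np

def simple_limiter(x, ceiling=0.9, attack=0.001, release=0.050, sr=48000):
    """A toy peak limiter: compute a smoothed gain curve so that |x * g| stays near the ceiling.

    attack/release are time constants in seconds; shorter values react faster
    (like a 'fast' speed setting), longer values give smoother gain reduction
    (like a 'slow' setting). Without lookahead, brief overshoots can occur.
    """
    a_att = np.exp(-1.0 / (attack * sr))
    a_rel = np.exp(-1.0 / (release * sr))
    gain = np.ones_like(x)
    g = 1.0
    for n, sample in enumerate(np.abs(x)):
        target = min(1.0, ceiling / sample) if sample > 0 else 1.0
        coeff = a_att if target < g else a_rel   # fall quickly, recover slowly
        g = coeff * g + (1.0 - coeff) * target
        gain[n] = g
    return x * gain, gain

# Hypothetical input: a sine burst that overshoots the ceiling
t = np.linspace(0, 0.1, 4800, endpoint=False)
x = 1.5 * np.sin(2 * np.pi * 200 * t)
y, gain = simple_limiter(x)
print("max input:", np.abs(x).max(), " max output:", np.abs(y).max())
```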
Style Modules
The four different Style Modules offer an easy way to tweak the spectral and temporal characteristics of your audio material. Each module comes with a main parameter to control its overall impact, an additional parameter to tweak the underlying processing (only available in Detailed View) and a visualisation showing the current effect on the signal. All modules can be enabled and disabled, or pinned to remain expanded.
Bass: enhance the low end of your signal. This can be helpful if the bass feels muddy or weak. Set a cut-off frequency to only apply the bass enhancement below this frequency.
Resonances: improve the spectral balance and tame resonances. This module is great for giving your track a final, subtle polish. Set the frequency resolution for the resonance processing.
Saturation: add saturation and increase the apparent loudness of your track without increasing peak level. This effect feels a bit like inflating the signal. Choose a character for the saturation effect.
Transients: tweak transient components and preserve the punch of your signal. Select the sensitivity for the transient tweaking effect.

Output Monitoring
FAST Limiter constantly monitors the loudness and peak level of your track. The large quality indicators in FAST View and the small quality indicators next to the actual measurement values in Detailed View indicate if your track is ready for publishing.
Loudness: integrated loudness of the observed output signal in LUFS.
Loudness Range: loudness variation between the loudest and quietest sections of your track.
Max. Peak: maximum true peak value of the observed output signal.
Pause/Play Icon: start or pause the loudness measurement.
Restart Icon: restart the measurement of loudness and peak. Restarting the measurement can be helpful if you have made significant changes to your mix. For the most precise results on a whole track, restart the measurement at the beginning of your track and let it run through until the end.
Quality Indicators can show three states: the value is looking good for publishing and everything is good to go; there could be a potential issue with your track (hover over the icon to learn more about the potential issue); or FAST Limiter has not yet collected enough information about your track (continue the playback and wait until the measurement becomes valid).
Per default, FAST Limiter assumes that you are publishing your tracks to a streaming platform like Spotify, YouTube or Apple Music and therefore uses a reference loudness level of -14 LUFS. If you want to select an alternative publishing target (CD or broadcasting), you can switch the reference loudness on the Settings page.

Global Control Section
Bypass. Undo/Redo. Save and Load Presets: to save a preset (all parameter values), click the Save button in the Control Section. To load a saved preset, choose the respective preset name from the preset dropdown. If you want to delete a preset or change its name, please go to the preset folder with your local file explorer. You can also easily share your presets among different workstations. All presets are saved with the file extension *.spr to the following folders:
OSX: ~/Library/Audio/Presets/Focusrite/FASTLimiter
Win: Documents\Focusrite\FASTLimiter\Presets\

Settings
Click the small cog wheel to access the settings page of FAST Limiter (e.g. to restart the Guided Tour or to check your subscription status).
Show Detailed View on Start-up (Global Setting): if you prefer working in Detailed View, enable this setting. FAST Limiter will now start up in Detailed View by default.
Show Tooltips (Global Setting): disable this option if you want to hide tooltips.
Use OpenGL (Global Setting): if you are experiencing graphic problems (e.g. rendering problems), you can try to disable the OpenGL graphics acceleration.
Loudness Target: select a publishing target (Streaming, CD, Broadcasting). This target will be used for the quality indicators.
Take Guided Tour: click this button to restart the Guided Tour. Please note that all parameters will be reset to their default values when a new tour is started!
License Information: this section shows the license information for your plug-in.
Help Center: visit the Help Center to e.g. manage your subscriptions or download new plug-ins and the latest updates (License Management).
Sample text on land cover interpretation (English):
In geographical information systems (GIS), land cover classification plays a crucial role in mapping and analyzing the Earth's surface. Land cover classification refers to the process of categorizing the different types of land surfaces, such as forests, water bodies, urban areas, and agricultural fields, based on their spectral, spatial, and temporal characteristics. This classification is typically done using remotely sensed data acquired from satellites or aerial imagery. The process involves various steps including image preprocessing, feature extraction, and classification algorithm application. Image preprocessing involves tasks like radiometric and geometric correction to enhance the quality of the images. Feature extraction aims to identify relevant information from the images, such as texture, color, and shape, which are then used as input variables for the classification algorithm. Classification algorithms include supervised, unsupervised, and hybrid techniques, each with its strengths and weaknesses. Supervised classification requires training samples for each land cover class, while unsupervised classification clusters pixels based on their spectral properties without prior knowledge. Hybrid techniques combine aspects of both supervised and unsupervised methods for improved accuracy. Once classified, the results are validated using ground truth data to assess the accuracy of the classification. This process helps in generating land cover maps that are valuable for various applications including environmental monitoring, urban planning, natural resource management, and disaster response.
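As a hedged illustration of the supervised approach described above, the sketch below implements a minimum-distance-to-means classifier on synthetic three-band spectra; the class means, band count and noise levels are invented for the example and do not come from any real scene.

```python
import numpy as np

def minimum_distance_classify(pixels, training, labels):
    """Minimum-distance-to-means supervised classification.

    pixels   : (n_pixels, n_bands) array of spectral values to classify
    training : (n_train, n_bands) array of labelled training spectra
    labels   : (n_train,) array of class ids for the training spectra
    Returns an array of predicted class ids, one per pixel.
    """
    classes = np.unique(labels)
    means = np.stack([training[labels == c].mean(axis=0) for c in classes])
    # Euclidean distance of every pixel to every class mean
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Hypothetical 3-band data with two classes (e.g. water vs vegetation)
rng = np.random.default_rng(0)
water = rng.normal([0.05, 0.04, 0.02], 0.01, size=(50, 3))
veg = rng.normal([0.05, 0.30, 0.25], 0.02, size=(50, 3))
train = np.vstack([water, veg])
y = np.array([0] * 50 + [1] * 50)
test = rng.normal([0.05, 0.28, 0.24], 0.02, size=(5, 3))
print(minimum_distance_classify(test, train, y))   # expected: mostly class 1
```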
arXiv:astro-ph/0105110 v1 7 May 2001

Temporal and Spectral Characteristics of Short Bursts from the Soft Gamma Repeaters 1806-20 and 1900+14

Ersin Göğüş 1,2, Chryssa Kouveliotou 2,3, Peter M. Woods 2,3, Christopher Thompson 4, Robert C. Duncan 5, Michael S. Briggs 1,2
Ersin.Gogus@

ABSTRACT
We study the temporal and coarse spectral properties of 268 bursts from SGR 1806−20 and 679 bursts from SGR 1900+14, all observed with the Rossi X-Ray Timing Explorer/Proportional Counter Array. Hardness ratios and temporal parameters, such as T90 durations and τ90 emission times, are determined for these bursts. We find a lognormal distribution of burst durations, ranging over more than two orders of magnitude: T90 ∼ 10^-2 to 1 s, with a peak at ∼0.1 s. The burst light curves tend to be asymmetrical, with more than half of all events showing rise times tr < 0.3 T90. We find that there exists a correlation between the duration and fluence of bursts from both sources. We also find a significant anti-correlation between hardness ratio and fluence for SGR 1806−20 bursts and a marginal anti-correlation for SGR 1900+14 events. Finally, we discuss possible physical implications of these results within the framework of the magnetar model.
Subject headings: X-rays: bursts – gamma rays: bursts – stars: individual (SGR 1806−20) – stars: individual (SGR 1900+14)

1. Introduction
Soft gamma repeaters (SGRs) are a small class of objects that are characterized by brief and very intense bursts of soft gamma-rays and hard X-rays. They are distinguished from classical gamma-ray bursts (GRBs) by their repeated periods of intense activity, during which dozens of bursts with energies approaching 10^41 ergs are recorded. SGR bursts have significantly softer spectra than classical GRBs; the former are well fit by an optically thin thermal bremsstrahlung model with temperatures kT = 20−40 keV. Two SGRs (0526-66 and 1900+14) have emitted one giant flare each: events that are much more energetic (E ∼ 10^44−45 erg) and contain a very hard spectral component within the first ∼1 s (Mazets et al. 1979; Cline et al. 1981; Hurley et al. 1999; Mazets et al. 1999; Feroci et al. 1999).
For a review of the burst and persistent emission properties of SGRs, see Hurley (2000).

In 1992, it was suggested that SGRs are strongly magnetized (B ≳ 10^14 G) neutron stars, or magnetars (Duncan & Thompson 1992; see also Paczyński 1992). This model suggests crustquakes as a plausible trigger for the short SGR bursts (as well as the giant flares): a sudden fracture of the rigid neutron star crust, driven by the build-up of crustal stresses as the strong magnetic field gradually diffuses through the dense stellar matter (Thompson & Duncan 1995 [hereafter TD95]; 1996). The motion of the crust shears and twists the external magnetic field, and in the process releases both elastic and magnetic energy.

Cheng et al. (1996) studied a set of 111 SGR 1806−20 bursts (detected with the International Cometary Explorer, ICE) and determined that some of their properties, such as the size and cumulative waiting-time distributions, are similar to those of earthquakes. Recently, Göğüş et al. (2000) confirmed these similarities for SGR 1806−20 using a larger sample of 401 bursts (111 detected with the Burst and Transient Source Experiment, BATSE, aboard the Compton Gamma Ray Observatory, CGRO; 290 with the Proportional Counter Array, PCA, on the Rossi X-ray Timing Explorer, RXTE). The same similarities were found for SGR 1900+14 when we analyzed a sample of 1024 bursts (187 observed with CGRO/BATSE; 837 with the RXTE/PCA) (Göğüş et al. 1999). Furthermore, we reported evidence for a correlation between duration and fluence for the SGR 1900+14 bursts, similar to the correlation seen between the duration of strong ground motion at short distances from an earthquake region and the seismic moment (∝ energy) of an earthquake (Lay & Wallace 1995). This similarity strongly suggests that SGR bursts, like earthquakes, may be manifestations of self-organized critical systems (Bak, Tang & Wiesenfeld 1988), lending support to the hypothesis that SGR bursts involve the release of some form of stored potential energy.

Statistical studies such as the above provide important clues not only about the mechanism by which energy is injected during an SGR burst, but also about the mechanism by which it is radiated. For example, in the magnetar model the stored potential energy may be predominantly magnetic, and the electrical currents induced by crustal fractures may be strongly localized over small patches of the star's surface, somewhat as they are in solar flares. In such a situation, the energy density in the non-potential magnetic field is high enough that excitation of high-frequency Alfvén motions leads to rapid damping and the creation of a trapped fireball (through a cascade to high wavenumber: Thompson & Blaes 1998). By contrast, if energy is injected more globally, then an approximate balance between injection and radiation will occur in bursts of luminosity less than ∼10^42 erg s^-1 (Thompson et al. 2000).

Here, we investigate detailed temporal characteristics of a large subset of SGR 1806−20 and SGR 1900+14 bursts observed with the RXTE/PCA during their burst active periods in 1996 and 1998, respectively. We apply to the SGR bursts some temporal analysis methods that were originally developed for the study of GRBs. We also study the spectral variations of the bursts as a function of the burst fluence and duration. In §2, we briefly review the RXTE/PCA observations. Section 3 describes the data analysis techniques and our results are discussed in §4.

2. Observations

The PCA instrument (Jahoda et al. 1996) consists of five Xe proportional counter units (PCUs) sensitive to energies between 2−60 keV with 18% energy resolution at 6 keV.
The PCA has a total effective area of ∼6700 cm^2.

SGR 1806−20: The RXTE/PCA observations of SGR 1806−20 were performed during a burst active period of the source in 1996 (between November 5 and 18), for a total effective exposure time of [...]. Using the burst search procedure described in Göğüş et al. (2000), we have identified 290 bursts from the source. The number of integrated counts (2−60 keV) for these bursts ranges over [...]. Using the count-to-energy conversion factor given in Göğüş et al. (2000), this corresponds to burst fluences between 1.2×10^-10 and 1.9×10^-7 ergs cm^-2 (E > 25 keV). Assuming isotropic burst emission, the corresponding energy range is 3.0×10^36 – 4.9×10^39 ergs, for a distance to SGR 1806−20 of 14.5 kpc (Corbel et al. 1997). To facilitate our analysis, we chose events that were clustered together during two very active epochs of the source, resulting in 268 events recorded on 5 (MJD 50392) and 18 (MJD 50405) November 1996.

SGR 1900+14: RXTE observations of SGR 1900+14 took place between 1998 June 2 and December 21, for a total effective exposure time of [...]. Using the same burst search algorithm as before, we identified a total of 837 bursts from the source, with integrated counts ranging between 22 and 60550. The fluences of these bursts range from 1.2×10^-10 to 3.3×10^-7 ergs cm^-2 (E > 25 keV), corresponding to an energy range of 7×10^35 – 2×10^39 ergs (assuming isotropic emission at 7 kpc [Vasisht et al. 1994]). Similarly, we selected 679 events which occurred during a very active period of the source between 1998 August 29 (MJD 51054) and September 2 (MJD 51058).

3. Data Analysis and Results

3.1. Duration (T90) estimates

Originally defined for cosmic GRBs, the T90 duration of a burst is the time during which 90% of the total (background-subtracted) burst counts have been accumulated since the burst trigger (Kouveliotou et al. 1993). We calculated the T90 duration of SGR bursts using event-mode PCA data (2−60 keV) with 1/1024 s time resolution. For each burst we collected the cumulative counts for an 8 s continuous stretch of data starting 4 s before its peak, t_p. We then fit the cumulative count distribution between two user-selected background intervals with a first-order polynomial plus a step function (for the burst) and subtracted the background counts. By using a first-order polynomial to fit the cumulative counts, we assume the PCA background remains flat over each 8 s segment. The resulting height of the step function gives the total burst counts. Figure 1 depicts the steps of the T90 estimate procedure for one of the SGR 1900+14 bursts.

As the selected bursts occurred during extremely active periods (for both sources), there were quite a number of cases where bursts were clustered very close together. During these active episodes, many crustal sites on the neutron star may be active, releasing stored potential energy in the form of bursts with a large variety of time profiles. Hence, it is important to distinguish single pulse events from events with multiple peaks (which may involve multiple fracture sites). To do this, we applied arbitrary but consistent criteria to our data. We classified an event as multi-peaked if the count rate at any local minimum (in 7 ms time bins) is less than half the maximum value attained subsequently in the burst (see Figure 2, middle plots). Otherwise the event was classified as single-peaked (Figure 2, top plots). When the count rate dropped to the noise level between peaks, the event was classified as a single, multi-peaked burst if and only if the time between peaks was less than a quarter of the neutron star rotation period (1.3 s for SGR 1900+14, 1.9 s for SGR 1806-20; see Figure 2, bottom plots).
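The cumulative-count procedure just described lends itself to a compact numerical implementation. The sketch below is not the code used in this analysis; it is a minimal Python illustration under stated assumptions (a burst already extracted as a 1/1024 s binned count series, with user-chosen pre- and post-burst background windows), and the function and variable names (t90_from_lightcurve, pre_bkg, post_bkg) are hypothetical. It fits a first-order polynomial plus a step to the cumulative counts over the background windows and reads T90 off the 5% and 95% crossings of the background-subtracted cumulative curve.

```python
import numpy as np

def t90_from_lightcurve(t, counts, pre_bkg, post_bkg):
    """Estimate T90 and the total burst counts from a binned light curve.

    t        : bin start times in seconds (uniform 1/1024 s bins assumed)
    counts   : counts per bin over an ~8 s window centred on the burst
    pre_bkg  : (tmin, tmax) of the user-selected pre-burst background interval
    post_bkg : (tmin, tmax) of the user-selected post-burst background interval
    """
    cum = np.cumsum(counts)

    pre = (t >= pre_bkg[0]) & (t < pre_bkg[1])
    post = (t >= post_bkg[0]) & (t < post_bkg[1])
    sel = pre | post

    # First-order polynomial (flat background rate) plus a step: the design
    # matrix columns are intercept, time, and a 0/1 step that is 1 only in
    # the post-burst window.  The fitted step amplitude is the total number
    # of background-subtracted burst counts.
    X = np.column_stack([np.ones(sel.sum()), t[sel], post[sel].astype(float)])
    (intercept, slope, total_counts), *_ = np.linalg.lstsq(X, cum[sel], rcond=None)

    # Background-subtracted cumulative counts over the full window.
    net = cum - (intercept + slope * t)

    # T90 = time between the 5% and 95% accumulation points.
    i05 = np.argmax(net >= 0.05 * total_counts)
    i95 = np.argmax(net >= 0.95 * total_counts)
    return t[i95] - t[i05], total_counts

# Toy usage on a simulated burst: flat 300 counts/s background plus a
# Gaussian pulse peaking at 4x10^4 counts/s, sampled in 1/1024 s bins.
rng = np.random.default_rng(0)
t = np.arange(0.0, 8.0, 1.0 / 1024)
rate = 300.0 + 4.0e4 * np.exp(-0.5 * ((t - 4.0) / 0.05) ** 2)
counts = rng.poisson(rate / 1024)
t90, burst_counts = t90_from_lightcurve(t, counts, (0.0, 2.0), (6.0, 8.0))
```

Because the fit uses only the background windows, the fitted step amplitude also serves as an estimate of the total burst counts, which is how the count fluences quoted above are obtained.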
We found that 262 of the 679 bursts from SGR 1900+14, and 113 of the 268 SGR 1806-20 bursts, were multi-peaked by these criteria. Note that the total counts per burst (or "count fluences") were found to span nearly the same range in the two classes of bursts, in both sources. This suggests that the light curve peak structure does not correlate strongly with total energy.

In the context of cosmic GRBs, the value of T90 can be systematically underestimated in faint events due to low signal-to-noise (Koshut et al. 1996). We investigated this effect in our SGR analysis using extensive numerical simulations. We created three time profiles based upon the observed time profiles: a single two-sided Gaussian (with right width twice the left width), two two-sided Gaussians whose peaks are separated by 0.5 s, with the peak rate of the second pulse 0.6 times that of the first, and two two-sided Gaussians with a peak separation of 1 s. For each profile, we varied the peak rates (eight values between 2600 counts s^-1 and 55000 counts s^-1) and the width of the Gaussians (eight values from 12 ms to 120 ms). We determined the T90 duration for each combination before adding noise. Then for each combination, we generated 800 realizations including Poisson noise and determined the respective T90 values for these simulations. We found that the fractional difference, FD = 1 − (T90,s/T90,a), between the actual value, T90,a (in the absence of noise), and the simulated one, T90,s, has a strong dependence on the peak rate (i.e. signal-to-noise ratio), such that FD can be as high as 0.22 at count rates of 2600 counts s^-1. It is, however, less than 0.05 for count rates greater than 4800 counts s^-1. The variation of FD with respect to the pulse profile is insignificant. Using the results of our simulations we obtained a T90 correction as a function of peak rate. We estimated the peak rate of each burst using a box-car averaging technique with a box width of 1/512 s. We then corrected the T90 values for the S/N effect using our correction function and these peak rates.

Our final data set thus comprises 455 SGR 1900+14 bursts with corrected T90 durations between 9 ms and 2.36 s (Figure 3, solid histogram). We were unable to determine statistically significant T90 durations for 187 SGR 1900+14 bursts because there were fewer than ∼40 counts per event. The solid curve in Figure 3 is a best-fit log-Gaussian function to all T90 values, which peaks at 93.9±0.2 ms (σ = 0.35±0.01, where σ is the width of the distribution in decades). The dashed histogram in Figure 3 is the distribution of T90 values of single pulse bursts, whose log-Gaussian mean is 46.7±0.1 ms (σ = 0.21±0.01). Also in Figure 3, the dash-dot histogram displays the T90 distribution of multi-peaked bursts; it peaks at 148.9±0.2 ms (σ = 0.26±0.02). For SGR 1806−20 bursts, we determined corrected T90 values for 190 bursts, which range between 16 ms and 1.82 s (49 bursts were too weak to determine their T90 values). The distribution of all T90 values is shown in Figure 4 (solid histogram). A log-Gaussian fit to this distribution yields a peak at 161.8±0.2 ms (σ = 0.34±0.02). Similarly, a log-Gaussian fit to the T90 distribution of the single pulse bursts (dashed lines in Figure 4) peaks at 88.1±0.1 ms (σ = 0.19±0.03), and a fit to the multi-peaked bursts (dash-dot lines in Figure 4) yields a peak at 229.9±0.3 ms (σ = 0.32±0.03).

In order to quantify the time profile symmetry of SGR bursts, we determined the ratio of the rise times (t_r, i.e. the interval between the T90 start time and the peak time t_p) to the T90 durations of the single pulse events. In Figure 5, we plot the distributions of these ratios for single pulse (solid histogram) and multi-peaked (dashed histogram) bursts from SGR 1900+14 (left) and from SGR 1806−20 (right).
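The log-Gaussian fits quoted above can be reproduced schematically along the following lines. This is an illustrative sketch rather than the fitting code used here: it assumes a plain array of corrected T90 values (generated synthetically below from the SGR 1900+14 numbers quoted above, 455 bursts peaking near 94 ms with a 0.35 dex width), histograms them in log10(T90), and fits a Gaussian in the log domain with scipy.optimize.curve_fit, so that the fitted mean gives the peak of the distribution and σ its width in decades.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_gaussian(logx, amp, mu, sigma):
    """Gaussian in log10(duration); sigma is the width in decades."""
    return amp * np.exp(-0.5 * ((logx - mu) / sigma) ** 2)

def fit_duration_distribution(t90, nbins=25):
    """Fit a log-Gaussian to a set of T90 durations (in seconds)."""
    logt = np.log10(t90)
    hist, edges = np.histogram(logt, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [hist.max(), logt.mean(), logt.std()]
    (amp, mu, sigma), _ = curve_fit(log_gaussian, centers, hist, p0=p0)
    return 10.0 ** mu, abs(sigma)  # peak duration (s) and width in decades

# Synthetic check: 455 durations drawn from a lognormal distribution
# peaking near 94 ms with a 0.35 dex width.
rng = np.random.default_rng(1)
fake_t90 = 10.0 ** rng.normal(np.log10(0.094), 0.35, size=455)
peak, width = fit_duration_distribution(fake_t90)
print(f"peak = {1e3 * peak:.1f} ms, width = {width:.2f} decades")
```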
The majority of single pulse events from both sources have t_r/T90 values less than 0.3, showing that SGR bursts decay more slowly than they rise, or in other words that SGR pulse profiles are asymmetric. We find that the average values of the t_r/T90 ratios are 0.29 and 0.27 for SGR 1900+14 and SGR 1806−20, respectively. The remarkably coincident average values, along with the very similar distributions of the t_r/T90 ratios of the two sources, suggest a similarity in the asymmetry of the temporal profiles of SGR bursts.

3.2. Emission time (τ90) estimates

The emission time, τN, was introduced for cosmic GRBs as a complementary temporal parameter to T90 (Mitrofanov et al. 1999). The emission time is the time over which a fixed percentage, N%, of the total burst emission was recorded, starting from the peak of the event and moving downward in flux. For each burst, we determined the average background level from the background intervals used in the T90 estimation procedure. The background-subtracted count bins within the burst interval (i.e. from the end of the pre-burst background range to the beginning of the post-burst background range) were then ordered by decreasing count rate. Starting with the highest count rate bin (i.e. the peak), we added the counts of each successively weaker bin until 90% of the total burst counts were accumulated. The τ90 emission time of the burst is then the total time spanned by the accumulated bins.

As with the T90 parameter, when the S/N ratio of the burst is low, the measured emission time can be systematically smaller than the actual value (Mitrofanov et al. 1999). In order to correct for this systematic error, we constructed numerous simulated profiles identical to those described in the previous section. We found that, similar to the T90 estimates, the FD between the actual and simulated τ90 emission times is strongly dependent on the peak rate. FD is ∼0.26 at count rates of 2600 counts s^-1 and less than 0.06 for count rates greater than 4800 counts s^-1. Similar to the T90 measurement, this effect is only weakly dependent on the temporal profile of the event. Using our τ90 correction function and the peak rates, we obtained the corrected τ90 emission times. The corrected values range between 4.6 and 412.8 ms for SGR 1900+14 bursts, and between 4.8 and 559.2 ms for SGR 1806−20 bursts. In Figure 6, we show the distributions of τ90 emission times for SGR 1900+14 (left) and SGR 1806−20 (right). The dashed lines in both plots are the best-fit log-Gaussian curves, which peak at 49.6±0.1 ms (σ = 0.28±0.01) for SGR 1900+14 events, and at 82.3±0.1 ms (σ = 0.32±0.02) for SGR 1806−20 events. The τ90 emission times of bursts from both sources are on average shorter than the T90 durations. Note that the log-Gaussian mean values of the T90 distributions of single pulse bursts from both SGRs are quite similar to those of the τ90 distributions.
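The τ90 emission time is even simpler to compute than T90. The following sketch is again illustrative rather than the analysis code (the function name tau90 and its arguments are hypothetical): it sorts the background-subtracted bins of the burst interval by decreasing count rate and accumulates them until 90% of the total burst counts are reached; the emission time is then the number of accumulated bins times the bin width.

```python
import numpy as np

def tau90(net_counts, bin_width=1.0 / 1024, fraction=0.90):
    """Emission time: total time spanned by the brightest bins that together
    contain `fraction` of the background-subtracted burst counts.

    net_counts : background-subtracted counts per bin within the burst interval
    bin_width  : width of one time bin in seconds (1/1024 s for the PCA data)
    """
    net_counts = np.asarray(net_counts, dtype=float)
    order = np.argsort(net_counts)[::-1]   # bins sorted by decreasing count rate
    cum = np.cumsum(net_counts[order])     # running total, brightest bins first
    n_bins = np.argmax(cum >= fraction * net_counts.sum()) + 1
    return n_bins * bin_width
```

Because the accumulated bins need not be contiguous, τ90 can be much shorter than T90 for complex bursts, which is what motivates the duty cycle δ90 = τ90/T90 introduced next.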
3.3. Duty cycles (δ90)

As shown in the previous section, the τ90 values of bursts are in general smaller than their T90 values. The reason is that τ90 is not a measure of the actual duration of the burst when the morphology of the burst is complex (e.g. multiple peaks). Mitrofanov et al. (1999) suggested that the ratio τ90/T90 can then be used to describe a duty cycle, δ90. We, therefore, determined the δ90 parameters for all bursts of both sources (Figure 7; left for SGR 1900+14 bursts, right for SGR 1806−20 bursts). Both distributions peak at about δ90 ∼ 0.45 and their overall shapes are very similar, indicating a strong similarity in the type of burst emission in the two sources, although their overall duration distributions differ.

3.4. Duration–Fluence–Hardness correlations

Gutenberg and Richter (1956) showed that there exists a power law relation between the magnitude (which is related to the total energy involved) of earthquakes and the durations of strong ground-shaking at short distances from an earthquake epicenter. We investigated whether a similar correlation exists in SGR events using their total burst counts and their T90 durations (as estimated in §3.1), for both SGR 1806−20 and SGR 1900+14. For each set of bursts, we grouped the T90 values into logarithmically spaced bins and determined the weighted mean value of the total burst counts and T90 durations for each bin. In Figure 8, we show the plot of integral counts vs T90 durations of SGR 1806−20 bursts. The crosses in this figure are the weighted means of each parameter; the errors on the mean T90 values denote the range of each bin, while the errors on the mean counts are due to sample variance. The dark points are the individual measurements of the burst counts and T90 values of single pulse events and the gray circles are those of multi-peaked bursts. Figure 8 shows that the integral counts and durations of SGR 1806−20 bursts are well correlated. To quantify this correlation, we determined the Spearman rank-order correlation coefficient, ρ = 0.91, and the probability of getting this value from a random data set, P = 3.4×10^-4. We further fit a power law model to the mean values (crosses) of the data using the least squares technique, which yields a power law index of 1.05±0.16.

Similarly, Figure 9 shows that the integral counts and T90 durations of SGR 1900+14 bursts are also correlated, having ρ = 0.89 and P = 8.6×10^-4. A power law fit to the mean values of the data yields an index of 0.91±0.07. The asterisk shown in the upper right portion of Figure 9 indicates where the precursor of the August 29 burst falls. This event was exceptionally long and bright, and resembled the August 27 giant flare in various ways (Ibrahim et al. 2000). A spectral line at 6.4 keV was reported during the precursor of the August 29 burst with ∼4σ significance (Strohmayer & Ibrahim 2000). If spectral lines are event-intensity dependent, there are very few events in Figure 9 during which we would expect to see any lines. A detailed spectral analysis of all SGR bursts is underway.

In conclusion, we find a good correlation between the duration and total counts (energy) of SGR bursts, quite similar to the one established for earthquakes. It is noteworthy that the burst count fluences of single pulse events (from both systems) span a range almost as wide as that of the fluences of multi-peaked bursts.
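A schematic version of the fluence–duration analysis is given below. It is not the published analysis code and, for brevity, it uses plain rather than error-weighted bin means; the names (binned_powerlaw, t90, counts) are illustrative. It groups the T90 values into logarithmically spaced bins, computes the mean duration and mean total counts in each bin, evaluates the Spearman rank-order correlation of the binned means with scipy.stats.spearmanr, and fits a power law by linear least squares in log-log space.

```python
import numpy as np
from scipy.stats import spearmanr

def binned_powerlaw(t90, counts, n_bins=8):
    """Correlate burst fluence (total counts) with T90 duration.

    Returns the Spearman coefficient and chance probability of the binned
    means, plus the best-fit power-law index of counts versus duration.
    """
    t90 = np.asarray(t90, dtype=float)
    counts = np.asarray(counts, dtype=float)

    # Logarithmically spaced duration bins.
    edges = np.logspace(np.log10(t90.min()), np.log10(t90.max()), n_bins + 1)
    idx = np.clip(np.digitize(t90, edges) - 1, 0, n_bins - 1)

    occupied = [i for i in range(n_bins) if np.any(idx == i)]
    mean_t90 = np.array([t90[idx == i].mean() for i in occupied])
    mean_cts = np.array([counts[idx == i].mean() for i in occupied])

    rho, prob = spearmanr(mean_t90, mean_cts)

    # Power law counts ~ T90**index, fit as a straight line in log-log space.
    index, _ = np.polyfit(np.log10(mean_t90), np.log10(mean_cts), 1)
    return rho, prob, index
```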
In order to investigate burst spectral variations versus fluence, we calculated an event hardness ratio defined as the ratio of the total burst counts observed between 10−60 keV to those between 2−10 keV. Bursts with total counts less than ∼50 yielded statistically insignificant hardness ratios (<3σ) and were therefore excluded from our analysis. For SGR 1806−20 bursts, we have 159 events with hardness ratios measured to >3σ accuracy. We divided the total counts of these events into logarithmically spaced bins and determined the weighted mean hardness ratio for each group. Figure 10a shows that the SGR 1806−20 burst hardness ratios are anti-correlated with fluence (ρ = −0.96, P = 2.6×10^-4). For 385 SGR 1900+14 bursts, Figure 10d shows a marginal anti-correlation between hardness and fluence (ρ = −0.89, P = 4.6×10^-3).

Interestingly, we find that although the hardness-fluence anti-correlation is evident in both sources, the SGR 1806−20 events are overall harder than the SGR 1900+14 ones. Our energy selection optimizes the PCA energy response, so that we are not affected by instrumental biases. Fenimore, Laros & Ulmer (1994) have performed a similar study of hardness ratio vs fluence for 95 SGR 1806-20 events detected with ICE. We cannot directly compare our results with this study for two reasons. First, the energy ranges over which the hardness ratios are computed differ significantly; while we use (10−60 keV)/(2−10 keV), Fenimore et al. (1994) use (43.2−77.5 keV)/(25.9−43.2 keV). The PCA sensitivity drops significantly beyond 25 keV, limiting the possibility of comparison. Furthermore, our fluence range ends at 2×10^-7 ergs cm^-2, just below the range described in Figure 1 of Fenimore et al. (1994). However, since Fenimore et al. report a constant trend above 10^-7 ergs cm^-2, one could postulate that the hardness ratio vs fluence trend we report levels off at higher fluences (energies).

We further investigated the hardness-fluence trend for burst morphology sub-sets, namely the single pulse and multi-peaked burst groups for each source. Figure 10b shows the hardness-fluence plot of single pulse SGR 1806−20 events and Figure 10c shows that of multi-peaked bursts. We see that both sets display spectral softening as the burst count fluence increases. Similarly, SGR 1900+14 events are shown in Figures 10e and 10f. For this source, the hardness-fluence anti-correlation is significant only for single pulse bursts. It is important to note that the peak rates of most of the highest fluence bursts reach 10^5 counts s^-1 (on a 1/1024 s time scale), around which PCA pulse pileup may become important. This effect can artificially harden the observed count spectrum at these rates. The spectral hardening or leveling off seen in the last bins of the plots of Figure 10 may well be due to the pulse pileup effect and should not be considered an intrinsic source property.
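The hardness ratios and the 3σ selection described above can be computed along the following lines. This is only an illustrative sketch under simplifying assumptions (it treats the band counts as Poisson variables and ignores the additional uncertainty from background subtraction); the function name hardness_ratios is hypothetical.

```python
import numpy as np

def hardness_ratios(soft, hard, min_sigma=3.0):
    """Hardness ratio HR = (10-60 keV counts)/(2-10 keV counts) per burst.

    soft, hard : arrays of band counts for each burst
    Returns HR, its 1-sigma error from simple Poisson propagation, and a
    boolean mask selecting bursts with HR measured to better than min_sigma.
    """
    soft = np.asarray(soft, dtype=float)
    hard = np.asarray(hard, dtype=float)
    hr = hard / soft
    # Fractional errors of a ratio of Poisson counts add in quadrature.
    hr_err = hr * np.sqrt(1.0 / hard + 1.0 / soft)
    good = hr / hr_err >= min_sigma
    return hr, hr_err, good
```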
We next investigated the relationship between the hardness and duration of SGR bursts. Although we find hardness-fluence anti-correlations and fluence-duration correlations, the burst hardness ratios of both SGR 1806−20 (Figure 11, squares) and SGR 1900+14 (Figure 11, diamonds) are independent of the event durations.

4. Discussion

Our study demonstrates that, unlike the T90 duration distribution of cosmic GRBs, which shows a bi-modal trend with peaks at ∼0.31 s and ∼37 s (Kouveliotou et al. 1993; Paciesas et al. 1999), the T90 distribution of SGR bursts displays a single peak which varies for each source: ∼93 ms and ∼162 ms for SGR 1900+14 and SGR 1806−20, respectively.

We find that the T90 durations of single-pulse bursts from both SGRs form narrow distributions (compared to those of multi-peaked events) which peak at ∼47 ms and ∼88 ms for SGR 1900+14 and SGR 1806-20, respectively. These bright, hard, and short bursts are almost certainly powered by a sudden disturbance of the rigid neutron star crust, which transmits energy to the magnetosphere. Magnetic stresses, which force the star between distinct metastable equilibria, provide the most plausible source of energy for the giant flares, and by inference for the short bursts (TD95; Thompson & Duncan 2001).

The duration of the multi-peaked bursts is clearly fixed by the time between successive releases of energy. The widths of the single pulse bursts could, in principle, be limited either by the rate of release of energy from the initial reservoir or, alternatively, by the time for the released energy to be converted to radiation through some intermediate reservoir. There have been various suggestions for such an intermediate storage mechanism: a hot fireball that is confined on closed magnetic field lines (TD95); a region of strong magnetic shear and high current density (Thompson et al. 2000); or a persistent vibration of the star (Fatuzzo & Melia 1994). Measurement of the burst rise time t_r provides a discriminant between these possibilities.
We find that t_r is characteristically much shorter than the total duration, as defined by either T90 or τ90. Moreover, the distribution of t_r/T90 is broad (both for single-pulsed bursts and multi-peaked events), which suggests that t_r is not directly connected to cooling. For example, the X-ray luminosity of a trapped fireball in local thermodynamic equilibrium is proportional to its surface area, and is a weak function of its internal temperature (Thompson & Duncan 1995). The characteristic radiative timescales are, in general, very short at the high spectral intensity of an SGR burst. Thus, the rise of the X-ray flux plausibly represents the initial injection of energy, although strictly t_r only sets an upper bound on the injection timescale.

While the overall shapes of the T90 (both single and multi-peaked) and the τ90 distributions of the two sources are quite similar (relatively consistent Gaussian widths), the peaks of both distributions for SGR 1900+14 occur at shorter durations compared to those for SGR 1806−20. This systematic difference in the burst durations probably results from some differing intrinsic property of the sources, such as the strength of the magnetic field, or the size of the active region.

We find a power law correlation between the total burst counts (fluence) and the duration of SGR bursts, with a power law index around 1. Similar behavior was noted for earthquakes by Gutenberg & Richter (1956). Within the context of earthquake mechanics, one way of defining duration (also known as "bracketed duration" [Bolt 1973]) is the time between the first and last excursions of the strong ground shaking above a threshold acceleration of 5% of g (the gravitational acceleration on Earth). Recently, Lay & Wallace (1995) presented the power law correlation between the seismic moment (∝ energy) and duration of 122 earthquakes with energies between 3.5×10^23 erg and 2.8×10^26 erg. The power law fit to these events yields an index of 3.03.

An equally important constraint on the injection and cooling mechanisms comes from the anti-correlation between the hardness and fluence of the SGR bursts. Although very significant for SGR 1806-20, this anti-correlation is much milder than that expected for black-body emission from a region of constant area; indeed, the trend of increasing hardness with lower fluence is opposite to that expected for constant-area emission. Two basic types of radiative mechanism could reproduce this trend. First, the emitting plasma could be in local thermodynamic equilibrium, which requires that its size (radiative area) should decrease at lower fluences. An alternative possibility is that the spectral intensity of the radiation field sits below that of a black body, and that the temperature of the emitting plasma is buffered within a narrow range. We consider each of these possibilities in turn.

An SGR burst can be parameterized by the rate of injection of energy into the magnetosphere, L_inj, and the volume V of the injection region. When L_inj is large and V is small enough, it is not possible to maintain a steady balance between heating and radiative cooling. The deposited energy is locked onto closed magnetic field lines of the neutron star, in a "trapped fireball" composed of photons and electron-positron pairs (TD95). This kind of event will tend to have a soft spectrum, because the injected energy has thermalized, and the plasma remains in LTE very close to its photosphere. The rise time is comparable to the time over which energy is initially injected, but the decay is limited by the rate of cooling through a thin radiative surface layer, which contracts toward the center of the fireball.
The declining light curve of the 27 August 1998 giant flare can be accurately fit by such a model (Feroci et al. 2001; Thompson & Duncan 2001). A second burst from SGR 1900+14 on 29 August 1998 has been interpreted in this trapped fireball model (Ibrahim et al. 2000). The main 29 August burst had a much shorter duration than the giant flare (∼3.5 s versus ∼400 s). This bright component was followed by a much fainter pulsating tail (extending out to ∼1000 s) which provides direct evidence for heating and compression of the neutron star surface by the fireball. In this model, the short duration and high luminosity of the bright component require that the trapped fireball had an approximately planar geometry, as would be expected if the energy were released along an extended fault (Ibrahim et al. 2000).

As our results show, most single-pulsed bursts from SGR 1900+14 have a much shorter duration (40 times smaller) than the bright component of the August 29 burst. It remains unclear, therefore, whether the trapped fireball model also applies to these much more frequent events. If it does apply, then the narrowness of the T90 distribution (compared with the wide range of measured fluences) also requires a planar geometry, because the cooling time is determined by the smallest dimension of the fireball.

The radiative mechanism is somewhat different if the injection luminosity L_inj lies below a critical value of ∼10^42 (V^{1/3}/10 km) ergs s^-1 (assuming a spherical geometry; Thompson & Duncan 2001). When the compactness is that low, it is possible to maintain a steady balance between the heating of a corona of electron-positron pairs and radiative diffusion out of the corona (which one deduces must be optically thick to scattering). The central temperature of the corona is buffered in the range ∼20−40 keV, and remains higher at lower luminosities – just the trend observed for SGR 1806-20 (Fig. 10). If this radiative model applies to the majority of single-pulsed events, then the constraints on the geometry of the active region are weaker, and one must consider alternative mechanisms for storing the injected energy, such as a persistent current driven by shearing motions in the neutron star crust (Thompson et al. 2000).