Published April 26, 2024 (author not given)


Common software testing terms:

1. Static testing: Non-Execution-Based Testing or Static Testing

Code walkthrough: Walkthrough

Code inspection: Code Inspection

Technical review: Review

2. Dynamic testing: Execution-Based Testing

3. White-box testing: White-Box Testing

4. Black-box testing: Black-Box Testing

5. Gray-box testing: Gray-Box Testing

6. Software quality assurance: SQA, Software Quality Assurance

7. Software development life cycle: Software Development Life Cycle

8. Smoke testing: Smoke Test

9. Regression testing: Regression Test

10. Functional testing: Function Testing

11. Performance testing: Performance Testing

12. Stress testing: Stress Testing

13. Load testing: Volume Testing

14. Usability testing: Usability Testing

15. Installation testing: Installation Testing

16. User interface testing: UI Testing

17. Configuration testing: Configuration Testing

18. Documentation testing: Documentation Testing

19. Compatibility testing: Compatibility Testing

20. Security testing: Security Testing

21. Recovery testing: Recovery Testing

22. Unit testing: Unit Test

23. Integration testing: Integration Test

24. System testing: System Test

25. Acceptance testing: Acceptance Test
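Items 22–25 list the classic levels of testing, from the smallest scope to the largest. As a minimal sketch of the lowest level, a unit test exercises one function in isolation; the `add` function and its tests below are hypothetical illustrations, not taken from the source:

```python
import unittest

def add(a, b):
    """Toy function under test (hypothetical example)."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test checks one small, independent behavior of add().
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_and_positive(self):
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()
```

Integration and system testing (items 23–24) then combine such units and exercise the assembled product as a whole.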

26. A test plan should include:

The Test Objectives

The Test Scope

The Test Strategy

The Test Approach

The Test Procedures

The Test Environment

The Test Completion Criteria

The Test Cases

The Test Schedules

Risks

Etc.

27. A master test plan

28. The Requirements Specification

29. The Requirements Phase



30. Interface

31. The End User

32. Formal Test Environment

33. Verifying the Requirements

34. Ambiguous Requirements

35. Operation and Maintenance

36. Reusability

37. Reliability / Availability

38. IEEE: The Institute of Electrical and Electronics Engineers

39. Software should be tested for the following attributes:

Correctness

Utility

Performance

Robustness

Reliability

About Bugzilla:

1. Bugs are classified by severity (Severity):

Blocker: blocks development and/or testing work

Critical: crashes, data loss, memory overflow

Major: a major functional defect

Normal: an ordinary functional defect

Minor: a minor functional defect

Trivial: cosmetic problems or small flaws that do not affect use, such as misspelled text or font problems in a menu or dialog box

Enhancement: a suggestion or request for improvement

2. Bugs are classified by report status (Status):

Unconfirmed

New

Assigned

Reopened

Resolved (awaiting retest)

Verified (awaiting archiving)

Closed (archived)

3. Bug resolutions (Resolution):

Fixed

Invalid (not a bug)

Wontfix (will not be fixed)

Later (to be fixed in a later release)

Remind (kept on record for later consideration)

Duplicate

Worksforme (cannot be reproduced)
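The Status values above form a workflow. As a sketch of that lifecycle (the transition table is a simplified assumption; real Bugzilla installations can customize their workflow), the states can be modeled with a Python enum:

```python
from enum import Enum

class Status(Enum):
    UNCONFIRMED = "Unconfirmed"
    NEW = "New"
    ASSIGNED = "Assigned"
    REOPENED = "Reopened"
    RESOLVED = "Resolved"
    VERIFIED = "Verified"
    CLOSED = "Closed"

# Allowed transitions in a typical Bugzilla-style workflow
# (an illustrative simplification, not the exact product behavior).
TRANSITIONS = {
    Status.UNCONFIRMED: {Status.NEW},
    Status.NEW: {Status.ASSIGNED, Status.RESOLVED},
    Status.ASSIGNED: {Status.RESOLVED},
    Status.RESOLVED: {Status.VERIFIED, Status.REOPENED},
    Status.VERIFIED: {Status.CLOSED, Status.REOPENED},
    Status.REOPENED: {Status.ASSIGNED, Status.RESOLVED},
    Status.CLOSED: {Status.REOPENED},
}

def can_move(frm: Status, to: Status) -> bool:
    """Return True if a bug may move from state `frm` to state `to`."""
    return to in TRANSITIONS.get(frm, set())
```

A tracker built on such a table can reject illegal jumps, e.g. moving a bug straight from Closed to Verified.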

Testing activity


Testing is an integral component of the software process

Testing is a critical element of software quality assurance

Testing is an activity that must be carried out throughout the software development life cycle

Software Testing Principles:

All tests should be traceable to customer requirements.

Tests should be planned long before testing begins

The Pareto principle applies to software testing (the 80/20 rule).

Testing should begin "in the small" and progress toward testing "in the large."

Exhaustive testing is not possible.

To be most effective, testing should be conducted by an independent third party.

Attributes of a "Good" Test

A good test has a high probability of finding an error.

A good test is not redundant.

A good test should be neither too simple nor too plex.

What else?

What Should Be Tested?

Correctness

Utility

Performance

Robustness

Reliability

Correctness

The extent to which a program satisfies its specification and fulfills the customer's mission objectives.

If input that satisfies the input specifications is provided and the product is given all the resources it needs, then the product is correct if the output satisfies the output specification.

If a product satisfies its specification, then this product is correct.

Questions:

Suppose a product has been tested successfully against a broad variety of test data. Does this mean that the product is acceptable?

Utility

Utility is the extent to which a user's needs are met when a correct product is used under the conditions permitted by its specifications.

It focuses on how easy the product is to use, whether the product performs useful functions, and whether the product is cost effective compared to competing products.

If the product is not cost effective, there is no point in buying it.

3 / 5

.

And unless the product is easy to use, it will not be used at all or it will be used incorrectly.

Therefore, when considering buying an existing product, the utility of the product should be tested first; and if the product fails on that score, testing should stop.

Performance

It is the extent to which the product meets its constraints with regard to response time or space requirements.

Performance is measured by processing speed, response time, resource consumption, throughput and efficiency.

For example, a nuclear reactor control system may have to sample the temperature of the core and process the data every 10th of a second. If the system is not fast enough to handle interrupts from the temperature sensor every 10th of a second, then data will be lost and there is no way of ever recovering the data; the next time that the system receives temperature data, it will be the current temperature, not the reading that was missed. If the reactor is on the point of a meltdown, then it is critical that all relevant information be both received and processed as laid down in the specifications.

With all real-time systems, the performance must meet every time constraint listed in the specifications.
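The reactor example can be sketched numerically: if handling one reading takes longer than the 0.1-second sampling period, readings that arrive in the meantime are lost. The function below is an illustrative model (not a real control loop), with made-up processing times:

```python
def lost_samples(processing_times, period=0.1):
    """Return indices of sensor readings that would be lost because the
    previous reading was still being processed when they arrived.

    processing_times[i] is how long handling reading i takes; readings
    arrive at fixed intervals of `period` seconds.
    """
    lost = []
    busy_until = 0.0
    for i, t in enumerate(processing_times):
        arrival = i * period
        if arrival < busy_until:
            lost.append(i)              # arrived while still busy: lost forever
        else:
            busy_until = arrival + t    # start processing this reading
    return lost
```

With one slow sample (0.25 s against a 0.1 s period), the model shows the next two readings being dropped, which is exactly the unrecoverable loss the paragraph describes.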

Robustness

Robustness essentially is a function of a number of factors, such as the range of operating conditions, the possibility of unacceptable results with valid input, and the acceptability of effects when the product is given invalid input.

A product with a wide range of permissible operating conditions is more robust than a product that is more restrictive.

It is difficult to come up with a precise definition…

A robust product should not yield unacceptable results when the input satisfies its specifications.

For example, when the tester gives a system with a invalid data,

the system responds with a message such as "Incorrect data, try again〞, it is m

ore robust than a system that crashes whenever the data deviate even slightly fro

m what is required.
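The "Incorrect data, try again" behavior can be sketched as input validation that rejects bad data instead of crashing; the function name, the return convention, and the accepted range below are illustrative assumptions, not from the source:

```python
def read_temperature(raw: str):
    """Parse a temperature reading; reject invalid input with a message
    instead of crashing (illustrative sketch of the robust behavior above).

    Returns (value, "") on success, or (None, error_message) on bad input.
    """
    try:
        value = float(raw)
    except ValueError:
        return None, "Incorrect data, try again"
    if not -50.0 <= value <= 150.0:  # assumed plausible operating range
        return None, "Incorrect data, try again"
    return value, ""
```

A non-robust version would simply call `float(raw)` and let the resulting exception take down the program.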

Reliability

If a program repeatedly and frequently fails to perform, it matters little whether other software quality factors are acceptable.

Software reliability is defined in statistical terms as "the probability of failure-free operation of a computer program in a specified environment for a specified time."

It is necessary to know how often the product fails.

When a product fails, an important issue is how long it takes, on average, to repair it.

Measure of Reliability: MTBF = MTTF + MTTR

MTBF: mean-time-between-failure

MTTF: mean-time-to-failure

MTTR: mean-time-to-repair

Software availability is the probability that a program is operating according to

requirements at a given point in time.


Measure of Availability:

Availability = [MTTF / (MTTF + MTTR)] * 100%

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

Deadlines

Test budget depleted

Test cases completed with a certain percentage passed

Coverage of code/functionality/requirements reaches a specified point

Bug rate falls below a certain level
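The factors above can be combined into a simple stop/continue decision; every threshold value below is an illustrative assumption, not a recommendation from the source:

```python
def should_stop_testing(deadline_reached: bool, budget_spent: bool,
                        pass_rate: float, coverage: float, bug_rate: float,
                        min_pass_rate=0.95, min_coverage=0.85, max_bug_rate=2.0):
    """Return True when any stopping criterion from the list above is met.

    pass_rate and coverage are fractions in [0, 1]; bug_rate is new bugs
    per unit of testing time (all thresholds are assumed example values).
    """
    quality_met = (pass_rate >= min_pass_rate
                   and coverage >= min_coverage
                   and bug_rate <= max_bug_rate)
    return deadline_reached or budget_spent or quality_met
```

In practice the hard deadlines (schedule, budget) override the quality-based criteria, which is why they appear as independent disjuncts here.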

What if there isn’t enough time for thorough testing?

Use risk analysis to determine where testing should be focused. Risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. Considerations can include:

Which functionality is most important to the project's intended purpose?

Which functionality is most visible to the user?

Which functionality has the largest safety/financial impact?

Which aspects of similar/related previous projects had large maintenance expenses?

Which parts of the code are most complex, and thus most subject to errors?
