Testing

Types of Testing

There are several different types of testing used to test software. These include, but are not limited to, white box testing, black box testing, positive testing, negative/defect testing, and metamorphic testing. White box testing is testing the software with knowledge of the code it is built from (Sethi, 2022). Black box testing is testing the software directly through its inputs and outputs, without looking at the code (Sethi, 2022). Positive testing is a type of testing where you supply valid inputs and already know the expected output for each test case. Negative/defect testing is a type of testing where you deliberately try to make the software fail, for example with invalid or unexpected inputs. Metamorphic testing is a type of testing where, instead of checking a single output against a known answer, you check that the relationship between the outputs of two or more related inputs holds (Chen et al., 2019). Generally, it is a good idea to combine all of these types when testing software, as this helps cover test cases that a single type of testing would miss. These are the types of testing that we used in our project, though our team mainly used positive testing.
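
As a concrete illustration of metamorphic testing, the sketch below (a made-up example, not taken from our project code) checks the metamorphic relation sin(x) = sin(pi - x): rather than knowing the exact expected output for any single input, the test only checks that the relationship between the outputs of two related inputs holds.

import math
import random

def test_sine_metamorphic_relation():
    # Metamorphic relation: sin(x) == sin(pi - x) for any x.
    # We never need the "correct" value of sin(x) itself; we only check
    # that the relation between the two related inputs holds.
    for _ in range(100):
        x = random.uniform(-10, 10)
        assert math.isclose(math.sin(x), math.sin(math.pi - x), abs_tol=1e-9)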

For more information on the different types of testing, links to further reading are provided below:



About Designing Tests

When designing test cases, we can improve test efficiency and coverage by approaching the task from the following perspectives.

Black Box Testing:

Functional Testing: Test whether the software's functions meet user requirements by designing test cases based on the requirements specification (see the example after this list).

Interface Testing: Verify if the user interface layout, interactions, and responses align with design requirements, covering various input scenarios and error handling.

Performance Testing: Design test cases to assess the system's performance under different load conditions, focusing on response time, throughput, and other performance metrics.

Compatibility Testing: Create test cases for various operating systems, browsers, devices, etc., to ensure the software functions correctly across different environments.
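
To make the functional testing perspective above concrete, here is a minimal pytest sketch. The calculate_discount function and the "orders of 100 or more get 10% off" requirement are hypothetical stand-ins; a real black box test would import the function from the system under test and rely only on the requirements specification, not on its implementation.

import pytest

# Stand-in implementation so the sketch runs; in real black box testing we
# would only know the requirement ("orders of 100 or more get 10% off"),
# not how calculate_discount is implemented internally.
def calculate_discount(order_total):
    return order_total * 0.9 if order_total >= 100 else order_total

@pytest.mark.parametrize("order_total, expected", [
    (50.0, 50.0),     # below the threshold: no discount
    (100.0, 90.0),    # at the threshold: 10% discount applies
    (200.0, 180.0),   # above the threshold: 10% discount applies
])
def test_discount_matches_requirements(order_total, expected):
    assert calculate_discount(order_total) == pytest.approx(expected)

Running pytest against this file executes all three requirement-based cases.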


White Box Testing:

Unit Testing: Design test cases based on the code to test specific code units like functions, methods, and modules, ensuring each unit functions correctly.

Integration Testing: Test the interactions and integration between different modules or components through designed test cases.

Path Coverage Testing: Design test cases to cover various program execution paths, including statement coverage, decision coverage, and condition coverage.
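
A minimal white box sketch, using a made-up classify_speed unit: the test cases are derived directly from the code's branches so that every decision outcome is exercised (decision coverage).

import pytest

# A small unit and the white box tests written against it; the function is a
# made-up example so that every decision outcome (branch) is exercised.
def classify_speed(speed_kmh):
    if speed_kmh < 0:
        raise ValueError("speed cannot be negative")
    if speed_kmh <= 50:
        return "urban"
    return "highway"

def test_negative_speed_branch():
    with pytest.raises(ValueError):
        classify_speed(-1)

def test_urban_branch():
    assert classify_speed(30) == "urban"

def test_highway_branch():
    assert classify_speed(90) == "highway"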


Boundary Testing:

Develop test cases to verify the system's behavior under different input scenarios, including minimum values, maximum values, and other boundary values, to ensure the system handles edge cases correctly.
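
A minimal boundary testing sketch, assuming a hypothetical steering-angle validator with an assumed valid range of -540 to 540 degrees; the test cases sit on, just inside, and just outside the boundaries.

import pytest

# Hypothetical validator: steering angles are assumed valid between -540 and
# 540 degrees (an assumed range, purely for illustration).
def is_valid_steering_angle(angle_degrees):
    return -540 <= angle_degrees <= 540

@pytest.mark.parametrize("angle, expected", [
    (-541, False),  # just below the minimum boundary
    (-540, True),   # minimum boundary value
    (0, True),      # nominal value
    (540, True),    # maximum boundary value
    (541, False),   # just above the maximum boundary
])
def test_steering_angle_boundaries(angle, expected):
    assert is_valid_steering_angle(angle) == expected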


Exception Testing:

Design test cases to evaluate the system's capability to handle exceptional circumstances such as error inputs, exceptional conditions, and system crashes, ensuring proper recovery or handling of such situations.
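
A minimal exception testing sketch, using a hypothetical sensor-frame parser as a stand-in; the tests check that malformed input raises a controlled error rather than crashing the system.

import pytest

# Hypothetical parser used as a stand-in: it is assumed to raise ValueError on
# malformed sensor frames instead of crashing or returning garbage.
def parse_sensor_frame(frame_bytes):
    if len(frame_bytes) < 4 or frame_bytes[:2] != b"\xaa\x55":
        raise ValueError("malformed sensor frame")
    return frame_bytes[2:]

def test_empty_frame_raises_value_error():
    with pytest.raises(ValueError):
        parse_sensor_frame(b"")

def test_bad_header_raises_value_error():
    with pytest.raises(ValueError):
        parse_sensor_frame(b"\x00\xff not a valid frame")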


End-to-End (E2E) Testing

Possible Methods for End-to-End Automation and Reduced Manual Intervention

During our communication with NIO, stakeholders raised a very insightful question: since our goal is to achieve a high level of end-to-end automation, simply providing simulated input data to the given sensors during testing is not enough; however, if a person has to intervene manually to turn the steering wheel, that contradicts the original intention of high automation.

We therefore conducted some research on how similar situations are handled. The results are as follows:

Automated Control Devices

Automated control devices are a solution that uses technologies such as mechanical arms and motors to rotate the steering wheel to different angles. With accurate programming and parameter settings, these devices can mimic human hand movements, providing an automated way to supply steering wheel input without any manual intervention.

During testing, automated control devices can precisely simulate driving scenarios, including turns at different angles and sharp corners. This ensures the accuracy and reliability of the tests while significantly increasing testing efficiency. Using automated control devices for steering wheel control also eliminates the influence of human factors, resulting in more stable and consistent test results.
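
A minimal sketch of how such a device could be driven from a test script, assuming a hypothetical SteeringRobot class with a rotate_to(angle_degrees) method standing in for a vendor SDK; the angle profile values are illustrative only.

import time

# Hypothetical stand-in for a steering robot; a real device would be driven
# through its vendor SDK rather than this print-based placeholder.
class SteeringRobot:
    def rotate_to(self, angle_degrees):
        print(f"commanding steering wheel to {angle_degrees} deg")

def run_steering_profile(robot, profile):
    """Replay a list of (time_offset_s, angle_degrees) steering commands."""
    start = time.monotonic()
    for time_offset_s, angle_degrees in profile:
        # Wait until the scheduled moment for this command.
        delay = time_offset_s - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        robot.rotate_to(angle_degrees)

if __name__ == "__main__":
    # Example profile: gentle left turn, return to centre, then a sharp right turn.
    run_steering_profile(SteeringRobot(), [(0.0, 0), (1.0, -45), (3.0, 0), (4.0, 120)])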


Feedback Control Integrated with Sensors

A feedback control system integrated with sensors can provide real-time feedback during testing to adjust the input commands for the steering wheel. For example, sensors such as gyroscopes and accelerometers can measure the actual steering wheel angle and allow real-time adjustments based on this data. It is also feasible to use visual sensors to monitor the vehicle's driving status: by monitoring the vehicle's motion and position in real time, the feedback control system can promptly adjust the steering wheel input to adapt to different driving conditions.

The sensor-based feedback control method enhances the accuracy and reliability of automated testing. Sensors can provide precise measurements of the steering wheel angle, ensuring consistency between actual and expected inputs. Through feedback control, the system can promptly adjust steering wheel inputs to better simulate real driving conditions.
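
A minimal sketch of the feedback idea, assuming hypothetical sensor readings and an illustrative proportional gain; a real rig would read the angle from a gyroscope and send commands to the actuator instead of the simulated loop below.

# Minimal sketch of sensor-based feedback control for the steering input.
def proportional_step(target_deg, measured_deg, gain=0.4):
    """Return the next actuator command, nudging the wheel toward the target."""
    return measured_deg + gain * (target_deg - measured_deg)

if __name__ == "__main__":
    target_deg = 90.0
    measured_deg = 0.0          # simulated gyroscope reading
    for _ in range(15):
        command = proportional_step(target_deg, measured_deg)
        measured_deg = command  # in reality the sensor would report the new angle
        print(f"commanded {command:.1f} deg, target {target_deg} deg")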


Data-Driven Testing and Machine Learning

Data-driven testing methods utilize machine learning algorithms and historical driving data to generate steering wheel angle data for testing purposes. By analyzing historical driving data and patterns, artificial intelligence models can create simulated steering wheel inputs to enable comprehensive testing across different scenarios.

This data-driven approach reduces the need for manual intervention and increases the scope of testing coverage. By leveraging machine learning algorithms, the model can learn and analyze extensive driving data to generate reasonable steering wheel inputs. This allows for comprehensive testing in various driving scenarios, including regular driving and emergency situation responses.
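
A minimal sketch of the data-driven idea, assuming a hypothetical log of historical steering angles grouped by scenario; a real project would train a learned model, whereas this sketch simply samples from the historical distribution for each scenario.

import random
import statistics

# Hypothetical historical steering angles (degrees) grouped by scenario.
historical_angles = {
    "lane_keeping": [-2.0, 1.5, 0.5, -1.0, 2.5, 0.0],
    "sharp_turn": [110.0, 95.0, 120.0, 105.0, 115.0],
}

def generate_steering_inputs(scenario, count):
    """Sample plausible steering angles that follow the historical distribution."""
    samples = historical_angles[scenario]
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return [random.gauss(mean, stdev) for _ in range(count)]

if __name__ == "__main__":
    print(generate_steering_inputs("sharp_turn", 5))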

Due to the high cost of implementing machine learning in our project and the limitations of sensor-integrated feedback control, our focus remains primarily on automated control devices. We have conducted in-depth research on bench testing and simplifying bench testing procedures.


Testing Research

Below is the research we wrote up about testing.
