21 has three methods for detecting objects:
21 uses multiple attributes to find an object: some depend on how the application is developed (e.g. resource ID, content description), and some are independent of how the application is developed (e.g. bounds of the element, position on the screen). 21 automatically picks up any IDs that exist, calculates additional attributes, and combines all of them through a scoring engine to find the right element.
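To make the idea concrete, here is a minimal sketch of how a multi-attribute scoring engine can work. This is an illustration only, not 21's actual implementation: the attribute names, weights, and functions below are all hypothetical.

```python
# Hypothetical weights: attributes that depend on how the app is built
# (resource ID, content description) count more than app-independent
# ones (bounds, on-screen position). These numbers are illustrative.
WEIGHTS = {
    "resource_id": 5.0,
    "content_desc": 3.0,
    "bounds": 1.0,
    "position": 1.0,
}

def score(candidate: dict, recorded: dict) -> float:
    """Sum the weights of every attribute that matches the recorded element."""
    return sum(
        weight for attr, weight in WEIGHTS.items()
        if attr in candidate and candidate.get(attr) == recorded.get(attr)
    )

def find_element(candidates: list, recorded: dict) -> dict:
    """Pick the on-screen candidate with the highest combined score."""
    return max(candidates, key=lambda c: score(c, recorded))
```

Because several attributes are combined, the match survives partial changes: an element whose ID and description still match wins even after it moves on the screen, while an element that merely occupies the old position scores low.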
21 does not use static XPath or CSS locators. Instead, the analysis is done in real time during the execution of each and every step (hence there is no static version of the locator code).
Users are not required to specify locators. Our autonomous locators support the majority of applications and are also resilient to changes in the app.
Autonomous detection is the default detection method used by 21.
For applications that have no unique way of finding objects, because of a shadow object tree or recycled objects with no IDs, we added the ability to visually analyze the view and find text using Optical Character Recognition (OCR). If the autonomous detection method does not identify the object (mostly in Flutter-based applications), use Text Analysis and type the required text under "Element Text (Optional)".
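The text-matching step after OCR can be sketched as follows. This is an illustration, not 21's actual code: it assumes an OCR pass has already produced a list of (recognized text, bounding box) pairs for the current view, and the function name is hypothetical.

```python
def find_by_text(ocr_results: list, element_text: str):
    """Return the bounding box of the OCR result whose text matches
    the requested "Element Text", ignoring case and surrounding
    whitespace (OCR output is often noisy at the edges)."""
    wanted = element_text.strip().lower()
    for text, box in ocr_results:
        if text.strip().lower() == wanted:
            return box
    return None  # no element with that text on screen
```

Because matching happens on the rendered pixels rather than the widget tree, this works even when the framework (e.g. Flutter) exposes no per-element IDs.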
Visual Analysis, similar to Text Analysis, uses computer vision to find graphical elements. If the autonomous detection method does not identify the object (mostly in Flutter-based applications), use Visual Analysis. An image of the identified object and the graphical representation of the element to be located are shown inside the panel.
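One common computer-vision technique for locating a graphical element is template matching: slide a reference image of the element over the screenshot and keep the offset with the smallest pixel difference. The brute-force sketch below is illustrative only and says nothing about 21's actual vision pipeline; images are plain 2D lists of grayscale values to keep it dependency-free.

```python
def locate(screenshot: list, template: list):
    """Return the (x, y) top-left corner where `template` best matches
    `screenshot`, using sum-of-squared-differences as the distance."""
    th, tw = len(template), len(template[0])
    sh, sw = len(screenshot), len(screenshot[0])
    best, best_pos = float("inf"), (0, 0)
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            # Pixel-wise squared difference between the patch and the template.
            ssd = sum(
                (screenshot[y + j][x + i] - template[j][i]) ** 2
                for j in range(th) for i in range(tw)
            )
            if ssd < best:
                best, best_pos = ssd, (x, y)
    return best_pos
```

An exact match scores zero, so the element is found wherever it is rendered, independent of IDs or the widget tree.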
Click here to learn more about the use of computer vision.