A list of the top frequently asked AWS interview questions and answers is given below.

1) What is AWS?

AWS stands for Amazon Web Services. It is a cloud platform provided by Amazon that uses distributed IT infrastructure to deliver IT resources on demand. It offers services at several levels: infrastructure as a service, platform as a service, and software as a service.

2) What are the components of AWS?

The main components of AWS are:

Simple Storage Service (S3): S3 is the AWS object storage service. Being object-based, it can store images, Word files, PDF files, and so on. A single object can range from 0 bytes to 5 TB, and total storage is unlimited, i.e., you can store as much data as you want. Files are stored in buckets, which work like folders. Bucket names live in a universal namespace, so each bucket name must be globally unique; the unique name is used to generate the bucket's DNS address.

Elastic Compute Cloud (EC2): EC2 is a web service that provides resizable compute capacity in the cloud. You can scale the capacity up and down as your computing requirements change, which changes the economics of computing by letting you pay only for the resources you actually use.

Elastic Block Store (EBS): EBS provides persistent block storage volumes for use with EC2 instances in the AWS cloud. Each EBS volume is automatically replicated within its Availability Zone to protect against component failure, offering the high durability, availability, and low-latency performance required to run your workloads.

CloudWatch: CloudWatch is a service used to monitor your AWS resources and applications in real time. It collects and tracks metrics that measure your resources and applications.
Identity and Access Management (IAM): IAM is the AWS service for managing users and their level of access to the AWS Management Console. It is used to set up users, permissions, and roles, and it lets you grant permissions to different parts of the AWS platform.

Simple Email Service (SES): Amazon SES is a cloud-based email sending service that helps digital marketers and application developers send marketing, notification, and transactional emails. It is a reliable and cost-effective service for businesses of all sizes that want to keep in touch with their customers.

Route 53: Route 53 is a highly available and scalable DNS (Domain Name System) service. It gives developers and businesses a reliable, cost-effective way to route end users to internet applications by translating domain names into numeric IP addresses.

3) What are key pairs?

Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. The public key is used to encrypt the information, and on the receiver's side the private key is used to decrypt it. Together, a public key and its private key are known as a key pair. Key pairs allow you to access your instances securely.

4) What is S3?

S3 is the AWS storage service that allows you to store vast amounts of data as objects.

5) What are the pricing models for EC2 instances?

There are four pricing models for EC2 instances:

On-Demand instances

On-Demand pricing is also known as pay-as-you-go.
Pay-as-you-go means you pay only for the resources you use: compute capacity is billed per hour or per second, depending on which instances you run, and no upfront payment is required. While using On-Demand instances you can increase or decrease compute capacity as your application's requirements change. On-Demand instances are recommended for short-term, unpredictable workloads; for users who want low cost and flexibility with no upfront payment; and for applications being developed or tested on EC2 for the first time.

Reserved instances

A Reserved instance reduces the overall cost of your AWS environment through an upfront payment for capacity you know you will use, with a discount of up to 75% compared with On-Demand pricing. A Reserved instance can be assigned to a specific Availability Zone, which reserves the compute capacity so you can use it whenever you need it. Reserved instances are mainly recommended for steady-state applications that require reserved capacity; customers who plan to use EC2 over a 1- or 3-year term can use them to reduce overall computing costs.

Spot instances

Spot instances sell unused EC2 capacity at a heavily discounted rate, up to 90% off the On-Demand price. They are recommended for applications with flexible start and end times, for workloads that need compute capacity at a very low price, and for urgent needs for additional compute capacity.

Dedicated Hosts

A Dedicated Host is a physical EC2 server dedicated to your use.
It can reduce overall costs by letting you run instances on dedicated physical hardware, including using your existing server-bound software licenses.

6) What is AWS Lambda?

AWS Lambda is a compute service that runs your code without your having to manage servers. A Lambda function runs your code whenever it is needed, and you pay only while your code is running.

7) How many buckets can be created in S3?

By default, you can create up to 100 buckets per account.

8) What is Cross-Region Replication?

Cross-Region Replication is an S3 feature that replicates data from one bucket to another bucket, which can be in the same or a different region. The copying is asynchronous, i.e., objects are not copied immediately.

9) What is CloudFront?

CloudFront is a content delivery network (CDN): a system of distributed servers that delivers web pages and other web content to users based on their geographic location.

10) What are Regions and Availability Zones in AWS?

Regions: A region is a geographical area consisting of two or more Availability Zones. Each region is a collection of data centers that is completely isolated from other regions.

Availability Zones: An Availability Zone is a data center (or group of data centers) somewhere within a region. A data center contains servers, switches, firewalls, and load balancers; the resources you interact with in the cloud physically reside inside these data centers.

11) What are edge locations in AWS?

Edge locations are AWS endpoints used for caching content close to users.
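As a toy illustration of the caching idea behind edge locations, the sketch below serves content from a local cache until a TTL expires, then fetches from the origin again. This is only a conceptual model, not how CloudFront is actually implemented:

```python
import time

class EdgeCache:
    """Toy edge cache: serve from cache while fresh, else fetch from origin."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # path -> (content, fetched_at)

    def get(self, path, fetch_from_origin):
        entry = self.store.get(path)
        if entry is not None:
            content, fetched_at = entry
            if time.time() - fetched_at < self.ttl:
                return content, "cache hit"
        # Not cached or expired: go back to the origin and refresh the cache.
        content = fetch_from_origin(path)
        self.store[path] = (content, time.time())
        return content, "cache miss"

cache = EdgeCache(ttl_seconds=60)
origin = lambda path: f"<html>page for {path}</html>"  # stand-in origin server

print(cache.get("/index.html", origin))  # first request goes to the origin
print(cache.get("/index.html", origin))  # repeat request is served from cache
```

The first request is a cache miss and populates the cache; the second is served locally, which is exactly the latency win edge locations provide.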
12) What is the minimum and maximum size of an object in S3?

The minimum size of an object you can store in S3 is 0 bytes and the maximum is 5 TB.

13) What are EBS volumes?

Elastic Block Store (EBS) provides persistent block storage volumes for use with EC2 instances in the AWS cloud. An EBS volume is automatically replicated within its Availability Zone to protect against component failure, and it offers the high durability, availability, and low-latency performance required to run your workloads.

14) What is Auto Scaling?

Auto Scaling is an AWS feature that automatically adjusts capacity to maintain steady, predictable performance. With Auto Scaling you can scale multiple resources across multiple services in minutes. If you are already using Amazon EC2 Auto Scaling, you can combine it with AWS Auto Scaling to scale additional resources for other AWS services.

Benefits of Auto Scaling:

Set up scaling quickly: You set target utilization levels for multiple resources in a single interface, and you can see the average utilization of all of them in the same console without switching between consoles.

Make smart scaling decisions: Auto Scaling builds scaling plans that automate how different resources respond to changes, optimizing for availability and cost. It automatically creates scaling policies and sets targets based on your preferences, then monitors your application and adds or removes capacity as required.

Automatically maintain performance: Auto Scaling optimizes application performance and availability even when workloads are unpredictable.
It continuously monitors your application to maintain the desired performance level, and when demand rises it automatically scales the resources.

15) What is an AMI?

AMI stands for Amazon Machine Image. It is a virtual image used to launch a virtual machine as an EC2 instance.

16) Can an AMI be shared?

Yes, an AMI can be shared.

17) What is an EIP?

An EIP (Elastic IP address) is a static IP address for use with EC2 instances. The address is associated with your AWS account, not with a particular EC2 instance, so you can disassociate an EIP from one instance and map it to another EC2 instance in your account.

Consider an example: suppose the website www.javatpoint.com points to an instance through its public IP address. When the instance is restarted, AWS assigns a new public IP address from its pool and the previous one is no longer valid, so the link between the website and the EC2 instance is broken. To avoid this, an Elastic IP address, a static address that does not change, is used instead.

18) What are the different storage classes in S3?

Storage classes protect against the concurrent loss of data in one or two facilities. Each object in S3 is associated with a storage class; you choose one based on your requirements, and all of them offer high durability.

19) How can you secure access to your S3 bucket?

An S3 bucket can be secured in two ways:

ACL (Access Control List)

An ACL manages access to buckets and objects; each bucket and object has an ACL associated with it.
The ACL defines which AWS accounts are granted access and the type of access. When a user requests a resource, the corresponding ACL is checked to verify whether that user has been granted access to it. When you create a bucket, Amazon S3 creates a default ACL that gives you full control over the bucket.

Bucket policies

Bucket policies apply only to S3 buckets and define which actions are allowed or denied. A bucket policy is attached to the bucket, not to individual S3 objects, but the permissions defined in it apply to all the objects in the bucket.

The main elements of a bucket policy are:

Sid: A Sid (statement ID) labels what the statement does. For example, if the action being permitted is adding a canned ACL, the Sid might be AddCannedAcl; if the policy is defined to evaluate IP addresses, the Sid might be IPAllow.

Effect: The effect defines what happens when the policy is applied: either to allow an action or to deny it.

Principal: The principal is a string that determines to whom the policy applies. Setting the principal to "*" applies the policy to everyone, but you can also specify an individual AWS account.

Action: The action is what the statement permits or denies. For example, s3:GetObject is the action that allows reading object data.

Resource: The resource is the S3 bucket to which the statement applies. You cannot simply enter a bucket name; it must be written in ARN format. For example, for a bucket named javatpoint-bucket, the resource would be written as "arn:aws:s3:::javatpoint-bucket/*".

20) What are policies, and what are the different types of policies?

A policy is an object associated with a resource that defines its permissions. AWS evaluates these policies when a user makes a request.
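Put together, the bucket-policy elements described above form a JSON policy document. A minimal sketch built with Python's json module (the Sid, bucket name, and public-read statement are illustrative examples, not a recommended policy):

```python
import json

# Assemble a bucket policy from the elements described above.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",           # label for this statement
            "Effect": "Allow",             # allow (rather than deny) the action
            "Principal": "*",              # applies to everyone
            "Action": "s3:GetObject",      # permission to read object data
            "Resource": "arn:aws:s3:::javatpoint-bucket/*",  # all objects in the bucket
        }
    ],
}

document = json.dumps(policy, indent=2)
print(document)
```

This document, attached to the bucket, would let anyone read any object in javatpoint-bucket; swapping "Effect" to "Deny" would block the same action instead.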
Permissions in the policy determine whether a request is allowed or denied. Policies are stored as JSON documents.

AWS supports six types of policies:

- Identity-based policies
- Resource-based policies
- Permissions boundaries
- Organizations SCPs
- Access control lists
- Session policies

Identity-based policies

Identity-based policies are permissions stored in JSON format and attached to an identity: a user, a group of users, or a role. They determine what actions the identity can perform, on which resources, and under what conditions.

Identity-based policies are further classified into two categories:

Managed policies: identity-based policies that can be attached to multiple users, groups, or roles. There are two types of managed policies:

AWS managed policies: policies created and managed by AWS. If you are using policies for the first time, AWS managed policies are recommended.

Customer managed policies: identity-based policies created and managed by you. They provide more precise control than AWS managed policies.

Inline policies: policies that you create and manage yourself and that are embedded directly into a single user, group, or role.

Resource-based policies

Resource-based policies are attached to a resource, such as an S3 bucket. They define what actions can be performed on the resource and under what conditions.

Permissions boundaries

A permissions boundary sets the maximum permissions that an identity-based policy can grant to an entity.

Service control policies (SCPs)

Service control policies are JSON policies that specify the maximum permissions for an organization. If you enable all features in an AWS Organization, you can apply SCPs to any or all of your AWS accounts.
An SCP can limit the permissions of entities in member accounts, including each account's root user.

Access control lists (ACLs)

An ACL controls which principals in another AWS account can access a resource; ACLs cannot be used to control access for principals within the same account. They are the only policy type that does not use the JSON policy document format.

21) What are the different types of instances?

The different instance types are:

General Purpose instances

General Purpose instances are the instances most commonly used by companies. There are two kinds: fixed performance (e.g., M3 and M4) and burstable performance (e.g., T2). Typical uses include development environments, build servers, code repositories, low-traffic websites and web applications, and microservices.

The General Purpose instances are:

T2 instances: T2 instances earn CPU credits while sitting idle and spend them while active. They do not use the CPU consistently, but they can burst to a higher performance level when the workload requires it.

M4 instances: M4 instances are the latest generation of General Purpose instances. They are a good choice for workloads that need a balance of memory and network resources.

M3 instances: The M3 instance is the prior generation to M4. It is mainly used for data processing tasks that require additional memory, caching fleets, and running backend servers for SAP and other enterprise applications.

Compute Optimized instances

The Compute Optimized family consists of two instance types: C4 and C3.

C3 instances: C3 instances are mainly used for applications that require very high CPU usage.
They are recommended for applications that need high computing power, as they offer high-performance processors.

C4 instances: The C4 instance is the next generation after C3 and is likewise aimed at applications that require high computing power. It uses the Intel Xeon E5-2666 v3 processor and hardware virtualization. According to the AWS specifications, C4 instances run at 2.9 GHz and can reach a clock speed of 3.5 GHz.

GPU instances

GPU instances (the G2 family) are mainly used for gaming applications that require heavy graphics and for 3D application data streaming. They include a high-performance NVIDIA GPU suitable for audio, video, 3D imaging, and graphics streaming workloads. NVIDIA drivers must be installed to run GPU instances.

Memory Optimized instances

Memory Optimized instances (the R3 family) are designed for memory-intensive applications. The R3 instance uses the Intel Xeon Ivy Bridge processor and can sustain a memory bandwidth of 63,000 MB/s. R3 instances suit high-performance databases, in-memory analytics, and distributed memory caches.

Storage Optimized instances

Storage Optimized instances consist of two types: I2 and D2.

I2 instances: The I2 instance provides heavy SSD storage for sequential read and write access to large data sets, along with fast random I/O for your applications.
It is best suited for applications such as high-frequency online transaction processing systems, relational databases, NoSQL databases, caches for in-memory databases, data warehousing, and low-latency ad-tech serving.

D2 instances: The D2 instance is a dense-storage instance with high-frequency Intel Xeon E5-2676 v3 processors, HDD storage, and high disk throughput.

22) What is the default storage class in S3?

The default storage class is S3 Standard, intended for frequently accessed data.

23) What is a Snowball?

Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud.

24) What is the difference between stopping and terminating an instance?

Stopping: Stopping an EC2 instance shuts it down. Its EBS volume remains attached, so you can restart the instance later.

Terminating: Terminating an EC2 instance removes it from your AWS account. By default its EBS root volume is also deleted, so you cannot restart the instance.

25) How many Elastic IPs can you create?

You can create five Elastic IP addresses per AWS account per region.

26) What is a Load Balancer?

A load balancer is a virtual appliance that balances your web application's incoming load, whether HTTP or HTTPS traffic, across multiple servers so that no single web server gets overwhelmed.

27) What is a VPC?

VPC stands for Virtual Private Cloud. It is an isolated area of the AWS cloud where you can launch AWS resources in a virtual network that you define.
It gives you complete control over your virtual networking environment, including selection of IP address ranges, creation of subnets, and configuration of route tables and network gateways.

28) What is a VPC peering connection?

A VPC peering connection is a networking connection that lets you connect one VPC to another over a direct network route using private IP addresses. With a peering connection, instances in different VPCs can communicate with each other as if they were in the same network. You can peer VPCs in the same account or across different AWS accounts.

29) What are NAT gateways?

NAT stands for Network Address Translation. A NAT gateway is an AWS service that allows EC2 instances in a private subnet to connect to the internet or to other AWS services.

30) How can you control security in your VPC?

You can control security in your VPC in two ways:

Security groups: A security group acts as a virtual firewall for associated EC2 instances, controlling both inbound and outbound traffic at the instance level.

Network access control lists (NACLs): A NACL acts as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level.

31) What are the different database types in RDS?

The database engines available in RDS are:

Amazon Aurora

Aurora is a database engine developed for RDS; unlike MySQL, which can be installed on any local device, Aurora runs only on AWS infrastructure. It is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases.
PostgreSQL

PostgreSQL is a popular open-source relational database for many developers and startups. With RDS it is easy to set up, operate, and scale PostgreSQL deployments in the cloud, cost-efficiently and within minutes. RDS manages time-consuming administrative tasks such as PostgreSQL software installation, storage management, and backups for disaster recovery.

MySQL

MySQL is an open-source relational database. With RDS it is easy to set up, operate, and scale MySQL deployments in the cloud, deploying scalable MySQL servers in minutes and cost-efficiently.

MariaDB

MariaDB is an open-source relational database created by the developers of MySQL. With RDS it is easy to set up, operate, and scale MariaDB deployments in the cloud, and you are freed from administrative tasks such as backups, software patching, monitoring, scaling, and replication.

Oracle

Oracle is a relational database developed by Oracle. With RDS it is easy to set up, operate, and scale Oracle deployments in the cloud; you can deploy multiple editions of Oracle in minutes, cost-efficiently, and the same administrative tasks (backups, patching, monitoring, scaling, replication) are managed for you. You can run Oracle under two licensing models: "License Included" and "Bring Your Own License (BYOL)". In the License Included model you do not have to purchase an Oracle license separately, as it is already licensed by AWS; pricing starts at $0.04 per hour.
If you have already purchased an Oracle license, you can use the BYOL model to run Oracle databases in Amazon RDS, with pricing starting at $0.025 per hour.

SQL Server

SQL Server is a relational database developed by Microsoft. With RDS it is easy to set up, operate, and scale SQL Server deployments in the cloud; you can deploy multiple editions of SQL Server in minutes, cost-efficiently, and RDS frees you from administrative tasks such as backups, software patching, monitoring, scaling, and replication.

32) What is Redshift?

Redshift is a fast, powerful, scalable, fully managed data warehouse service in the cloud. It delivers up to ten times the performance of other data warehouses by using machine learning, massively parallel query execution, and columnar storage on high-performance disks. You can query petabytes of data in a Redshift data warehouse and exabytes of data in a data lake built on Amazon S3.

33) What is SNS?

SNS stands for Simple Notification Service. It is a web service that provides a highly scalable, cost-effective, and flexible way to publish messages from an application and deliver them to other applications: in short, a way of sending messages.

34) What are the different types of routing policies in Route 53?

The routing policies in Route 53 are:

Simple routing policy

A simple routing policy is a basic round-robin policy applied when a single resource performs the function for the domain, for example, one web server serving the content of a website. Route 53 responds to DNS queries based on the values in the resource record set.

Weighted routing policy

A weighted routing policy lets you route traffic to different resources in specified proportions.
For example, 75% of requests to one server and 25% to another. Weights can be assigned in the range 0 to 255. Weighted routing is applied when multiple resources perform the same function, for example, several web servers serving the same website: each web server is given its own weight, and the policy associates the multiple resources with a single DNS name.

Latency-based routing policy

A latency-based routing policy lets Route 53 respond to a DNS query from the data center that gives the lowest latency. It is used when multiple resources serve the same domain; Route 53 identifies the resource that provides the fastest response with the lowest latency.

Failover routing policy: routes traffic to a primary resource and fails over to a secondary resource when the primary becomes unhealthy.

Geolocation routing policy: routes traffic based on the geographic location of the users making the requests.

35) What is the maximum size of a message in SQS?

The maximum size of a message in SQS is 256 KB.

36) What are the differences between a security group and a network access control list?

- Rules: A security group supports only allow rules; anything not explicitly allowed is denied, and you cannot write a deny rule. A NACL supports both allow and deny rules; by default all traffic is denied until you add rules.
- State: A security group is stateful: return traffic for an allowed connection is permitted automatically, so if you allow inbound port 80 you do not need a separate outbound rule for the response. A NACL is stateless: return traffic must be allowed explicitly, so an inbound rule for port 80 needs a matching outbound rule.
- Scope: A security group is associated with an EC2 instance; a NACL is associated with a subnet.
- Evaluation: For a security group, all rules are evaluated before deciding whether to allow traffic. For a NACL, rules are evaluated in order, starting from the lowest-numbered rule.
- Application: A security group applies to an instance only if you specify it when launching the instance. A NACL automatically applies to all instances in the subnets it is associated with.
- Layering: inbound traffic is evaluated by the NACL at the subnet boundary first, then by the security group at the instance level.

37) What are the two types of access that you can provide when creating users?

There are two types of access:

Console access: To use the AWS Management Console, the user needs a password to log in to the AWS account.

Programmatic access: With programmatic access, an IAM user makes API calls, for example through the AWS CLI. To use the AWS CLI, you need to create an access key ID and a secret access key.

38) What is a subnet?

A subnet is a smaller unit obtained by dividing a large block of IP addresses. A Virtual Private Cloud (VPC) is a virtual network assigned to your AWS account; when you create a VPC, you specify its IPv4 address range in the form of a CIDR block, and after creating the VPC you create subnets in each Availability Zone. Each subnet has a unique ID.
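Carving a VPC CIDR block into per-Availability-Zone subnets can be sketched with Python's standard ipaddress module (the CIDR ranges and zone names here are illustrative, not tied to any real VPC):

```python
import ipaddress

# A VPC is created with one large CIDR block...
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

# ...which is then divided into smaller subnets, one per Availability Zone.
subnets = list(vpc_cidr.subnets(new_prefix=24))
zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
plan = {zone: str(subnet) for zone, subnet in zip(zones, subnets)}

for zone, cidr in plan.items():
    print(zone, cidr)  # e.g. us-east-1a 10.0.0.0/24
```

A /16 block yields 256 non-overlapping /24 subnets, each with its own address range, which mirrors how subnets partition a VPC's address space.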
Launching instances in multiple Availability Zones protects your applications from the failure of a single location.

39) What are the differences between Amazon S3 and EC2?

S3: S3 is a storage service that can store any amount of data. It exposes a REST interface and uses secure HMAC-SHA1 authentication keys.

EC2: EC2 is a web service used for hosting applications. An EC2 instance is a virtual machine that can run Linux or Windows and applications such as PHP, Python, Apache, or databases.

40) Can you establish a peering connection to a VPC in a different region?

Originally, peering was limited to VPCs in the same region, but AWS now supports inter-region VPC peering, so VPCs in different regions can also be peered.

41) How many subnets can you have per VPC?

By default, you can have 200 subnets per VPC.

42) When was EC2 officially launched?

EC2 was officially launched in 2006.

43) What is Amazon ElastiCache?

Amazon ElastiCache is a web service that allows you to easily deploy, operate, and scale an in-memory cache in the cloud.

44) What are the types of AMI provided by AWS?

There are two types of AMI provided by AWS:

Instance store-backed

An instance store-backed EC2 instance has its root device on the host machine's local disk. When you launch the instance, the AMI is copied to that disk. Because the root device lives on the host's local storage, you cannot stop the instance.
You can only terminate it, and doing so deletes the instance; it cannot be recovered. If the underlying disk fails, you can lose your data, so you need to leave an instance store-backed instance running until you are completely done with it. You are charged from the moment the instance starts until it is terminated.

EBS-backed

An EBS-backed instance is an EC2 instance that uses an EBS volume as its root device. EBS volumes are not tied to particular hardware, but they are restricted to an Availability Zone: an EBS volume can be moved from one machine to another within the same Availability Zone. If the underlying host fails, the instance can be moved to another host. The main advantage of EBS-backed over instance store-backed instances is that they can be stopped: while the instance is stopped, the EBS volume is preserved for later use (and the host can serve other instances), and you are charged only for the EBS storage, not for instance usage.

45) What is Amazon EMR?

EMR stands for Amazon Elastic MapReduce. It is a web service used to process large amounts of data in a cost-effective manner. The central component of EMR is the cluster: a collection of EC2 instances, where each instance in a cluster is known as a node. Each node has a role, known as its node type, and EMR installs different software components on each node type.

The node types are:

Master node: The master node runs software components that distribute tasks among the other nodes in the cluster. It tracks the status of all tasks and monitors the health of the cluster.

Core node: A core node runs software components that process tasks and store data in the Hadoop Distributed File System (HDFS).
Multi-node clusters have at least one core node.

Task node
A task node runs software components that process tasks but does not store data in HDFS. Task nodes are optional.

46) How do you connect an EBS volume to multiple instances?

In general, an EBS volume can be attached to only one instance at a time, although you can attach multiple EBS volumes to a single instance. (Provisioned IOPS volumes additionally support EBS Multi-Attach within an availability zone.)

47) What is the use of lifecycle hooks in Auto Scaling?

Lifecycle hooks let you perform custom actions by pausing instances when an Auto Scaling group launches or terminates them. While paused, the instance is in a wait state; by default it remains in the wait state for one hour. For example, when you launch a new instance, a lifecycle hook pauses it so you can install software or otherwise make sure the instance is completely ready before it starts receiving traffic.

48) What is Amazon Kinesis Firehose?

Amazon Kinesis Firehose is a web service used to deliver real-time streaming data to destinations such as Amazon Simple Storage Service, Amazon Redshift, etc.

49) What is the use of the Amazon S3 Transfer Acceleration service?

Amazon S3 Transfer Acceleration enables fast and secure transfer of data between your client and an S3 bucket over long distances.

50) How will you access the data on EBS in AWS?

EBS stands for Elastic Block Store. It provides virtual disks in the cloud: you create a storage volume and attach it to an EC2 instance. EBS volumes can hold databases as well as ordinary files, and a volume can be formatted with a file system and mounted so its data is accessed directly.
51) What are the differences between horizontal scaling and vertical scaling?

Vertical scaling means adding compute power, such as CPU and RAM, to an existing machine, while horizontal scaling means adding more machines to your server pool or database. In other words, horizontal scaling increases the number of nodes and distributes the workload among them.
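The contrast can be sketched in plain JavaScript: vertical scaling replaces one machine with a bigger one, while horizontal scaling adds machines and spreads work across them, for example with a round-robin dispatcher. This is a minimal illustrative sketch; the node names and the makeRoundRobinDispatcher helper are hypothetical, not an AWS API.

```javascript
// Horizontal scaling: more nodes, work distributed among them.
// A round-robin dispatcher is one simple distribution strategy.
function makeRoundRobinDispatcher(nodes) {
  let next = 0;
  return function dispatch(task) {
    const node = nodes[next];
    next = (next + 1) % nodes.length; // rotate through the fleet
    return { node, task };
  };
}

const dispatch = makeRoundRobinDispatcher(['node-a', 'node-b', 'node-c']);
console.log(dispatch('req-1').node); // node-a
console.log(dispatch('req-2').node); // node-b
console.log(dispatch('req-3').node); // node-c
console.log(dispatch('req-4').node); // node-a (wraps around)
```

Adding a fourth node to the array is "scaling out" with no change to the dispatch logic; vertical scaling, by contrast, would keep one node and give it more CPU or RAM.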
Published - Tue, 06 Dec 2022
A list of top frequently asked QA Interview Questions or Quality Assurance Interview Questions and answers are given below.

1) What is Quality Assurance?

QA stands for Quality Assurance. QA is a set of activities designed to ensure that the developed software meets all the specifications or requirements mentioned in the SRS document. QA follows the PDCA cycle:

Plan
The plan is the phase in which the organization determines the processes required to build a high-quality software product.

Do
Do is the phase of developing and testing the processes.

Check
This phase monitors the processes and verifies whether they meet the user requirements or not.

Act
The Act is the phase for implementing the actions required to improve the processes.

2) What is the difference between Quality Assurance and Software testing?

The following is the list of differences between Quality Assurance and Software testing:

Activities: Quality Assurance is a set of activities used to ensure that the developed software meets all the user requirements. Software testing is an activity performed after the development phase to check whether the actual results match the expected results and to ensure that the software is bug-free.
In short, we can say that software testing is verification of the application under test.
Scope: Quality Assurance involves activities that cover the implementation of processes, procedures, and standards. Software testing involves activities that verify the software itself.
Orientation: Quality Assurance is process-oriented, i.e., it checks the processes to ensure that quality software is delivered to the client. Software testing is product-oriented, i.e., it checks the functionality of the software.
Activity type: Quality Assurance is preventive; software testing is corrective.
Objective: The main objective of Quality Assurance is to deliver quality software. The main objective of software testing is to find bugs in the developed software.

3) How do build and release differ from one another? Write down the difference between build and release.

Build: when the software is handed to the testing team by the development team.
Release: when the software is handed over to the users by the testers and developers.

4) Define bug leakage and bug release.

Bug leakage: a bug that was not found by the testing team but is found by the end users.
Bug release: when the software is released to the market even though the testers know that a bug is present in the release. Such bugs have low priority and severity.
This situation arises when the customer would rather have the software on time than accept the delay and the cost involved in correcting the bugs.

5) What are the solutions for software development problems?

There are five different solutions for software development problems:
The requirements for software development should be clear, complete, and agreed upon by all, setting up the requirements criteria.
Next is a realistic schedule, with time for planning, designing, testing, fixing bugs, and re-testing.
Sufficient testing is required; start testing immediately after one or more modules are developed.
Use group communication tools.
Use rapid prototyping during the design phase so that it is easy for the customer to find out what to expect.

6) Explain the types of documents in Software Quality Assurance.

The following are the types of documents in Software Quality Assurance:

Requirement Document
All the functionalities to be added to the application are documented in terms of requirements, and the document is known as the Requirement document. It is made in collaboration with various people on the project team, such as developers, testers, Business Analysts, etc.

Test Metrics
Test metrics are quantitative measures that determine the quality and effectiveness of the testing process.

Test plan
It defines the strategy that will be applied to test an application, the resources that will be used, the test environment in which testing will be performed, and the scheduling of test activities.

Test cases
A test case is a set of steps and conditions used at the time of testing. This activity is performed to verify whether all the functionalities of the software are working properly or not. There can be various types of test cases, such as logical, functional, error, negative, physical, and UI test cases.

Traceability matrix
The traceability matrix is a table that traces and maps the user requirements to test cases.
The main aim of the Requirement Traceability Matrix is to check that all test cases are covered so that no functionality is missed during software testing.

Test scenario
A test scenario is a collection of test cases that helps the testing team determine the positive and negative aspects of a project.

7) What is the rule of "Test Driven Development"?

In Test Driven Development, test cases are prepared before writing the actual code. That means you have to write the test case before the real development of the application.

Test Driven Development cycle:
Write the test cases.
Execute the test cases.
If a test case fails, change the code to make it pass.
Repeat the process.

8) What is a traceability matrix?

A traceability matrix is a document that maps and traces user requirements to test cases. The main aim of the Requirement Traceability Matrix is to check that all test cases are covered so that no functionality is missed during software testing.

9) Write down the differences between the responsibilities of QA and programmers.

The differences in responsibilities are:
QA responsibility: The QA team is concerned with process quality. Programmer responsibility: Programmers are concerned with product quality.
QA responsibility: QA ensures that the processes used for developing the product are of high quality. Programmer responsibility: Programmers use these processes so that the end product is of good quality.
Any issue found by the programmers during execution of the process is communicated to QA so that QA can improve the process.

10) What is the difference between Verification and Validation?

Verification is the process of evaluating the work products of the development phase to determine whether they meet the user requirements; validation is the process of evaluating the product after the development process to determine whether it meets the specified requirements.
Verification is static testing; validation is dynamic testing.
Verification is performed before validation; validation is performed after verification.
Verification does not involve executing the code; validation involves executing the code.
Verification involves activities such as reviews, walkthroughs, inspections, and desk checking; validation involves methods such as black box testing, white box testing, and non-functional testing.
Verification finds bugs early in the development cycle; validation finds bugs later, after development.
Verification checks conformance to the requirements specified in the SRS document; validation checks whether the product meets the user's actual requirements.
The QA team performs verification, checking that the software is according to the requirements specified in the SRS document; software testers perform validation of the product.

11) Define the key challenges faced during software testing.

The application should be stable for testing.
Testing under a time constraint.
Deciding which tests to execute first.
Testing the complete application.
Regression testing.
Lack of skilled testers.
Changing requirements.
Lack of resources, training, and tools.

12) What is the difference between Retesting and Regression testing?

Regression testing verifies whether new changes in the code have affected the existing, unchanged features; retesting is the testing of modules that failed in the last execution.
The main aim of regression testing is to ensure that changes made to the code do not affect existing functionalities; retesting is performed on defects that have been fixed.
Regression testing is generic and can be performed whenever the code changes; retesting is planned testing.
Regression testing is performed on test cases that passed earlier; retesting is performed on test cases that failed.
Regression testing can be automated, since performing it manually is expensive and time-consuming; retesting, which re-runs the failed test cases, is typically not automated.
Defect verification does not come under regression testing; defect verification comes under retesting.
Based on the availability of resources, regression testing can be performed in parallel with retesting; retesting has higher priority, so it is usually performed before regression testing.

13) Define the role of QA in software development.

QA stands for Quality Assurance. The QA team pursues quality by monitoring the whole development process.
QA tracks the outcomes and adjusts processes to meet expectations.

Roles of Quality Assurance:
The QA team is responsible for monitoring the process carried out for development.
The responsibilities of the QA team include planning the testing execution process.
The QA lead creates the timetable and agrees on a Quality Assurance plan for the product.
The QA team communicates the QA process to the team members.
The QA team ensures traceability of test cases to requirements.

14) Describe the dimensions of risk in QA.

The dimensions of risk are:
Schedule: unrealistic schedules, e.g., expecting to develop a huge piece of software in a single day.
Client: ambiguous requirements definitions, requirements that are not clear, changes in requirements.
Human resources: non-availability of sufficient resources with the skill level expected in the project.
System resources: failure to acquire all critical resources, whether hardware, software tools, or software licenses, will have an adverse effect.
Quality: compound factors such as lack of resources along with a tight delivery schedule and frequent changes to the requirements will affect the quality of the product tested.

15) What is testware?

Testware is a term used to describe all the materials used to perform a test.
Testware includes test plans, test cases, test data, and any other items needed to design and perform a test.

16) What is monkey testing?

Monkey testing is a type of black box testing that tests the application by providing random inputs and checking the system's behavior, for example checking whether the system crashes.
There is no need to create test cases to perform monkey testing.
It can be automated, i.e., we can write programs or scripts that generate random inputs and check the system's behavior.
This technique is useful when performing stress or load testing.

There are two types of monkeys:

Smart monkeys
Smart monkeys are those that have a brief idea about the application.
They know which page of the application will redirect to which page.
They know whether the inputs they are providing are valid or invalid.
If they find an error, they are smart enough to file a bug.
They also know about the menus and buttons.

Dumb monkeys
Dumb monkeys are those that have no idea about the application.
They do not know where the pages of the application will redirect to.
They provide random inputs and do not know the starting and ending points of the application.
Although they do not know much about the application, they can still find bugs, such as environmental or hardware failures.
They also do not know much about the functionality and UI of the application.

17) Write the differences between the preventive and reactive approaches.

Preventive approach: It is also known as the verification process. The preventive approach aims to prevent defects. In this approach, tests are designed in the early stages of the Software Development Lifecycle, before the software has been developed.
In this approach, testers try to prevent defects in the early stages; it comes under Quality Assurance.

Reactive approach: It is also known as the validation process. This approach aims to identify defects. In this approach, tests are designed to be executed after the software has been developed, and we try to find the defects. It comes under Quality Control.

18) What is a Quality Audit?

An audit is defined as an on-site verification activity, such as an inspection or examination, of a process or quality system. A quality audit is the systematic analysis of a quality system carried out by an internal or external quality auditor or an audit team. Quality audits are performed at predefined time intervals and ensure that the institution has clearly defined internal system monitoring procedures linked to effective action. Audits are an essential management tool for verifying objective evidence of processes.

19) What is a test plan?

The test plan document contains the plan for all the testing activities needed to deliver a quality product. It is derived from inputs such as the product description, the SRS, or use case documents covering all future events of the project. The test lead or test manager usually prepares it, and the focus of the document is to describe what to test, how to test, when to test, and who will do which test.

20) How do you decide when you have tested enough?

This is one of the most crucial questions. As a project manager or project lead, we sometimes face a situation where we have to call off testing to release the product early.
In those cases, we have to decide whether the testers have tested the product enough or not. Many factors are involved in real-time projects when deciding when to stop testing:
Testing deadlines or release deadlines are reached.
The agreed pass percentage of test cases has been achieved.
The remaining risk in the project is under the acceptable limit.
All high-priority bugs and blockers have been fixed.
The acceptance criteria have been met.

21) How do you design test cases?

There are mainly two techniques to design test cases:

Black box testing
It is a specification-based technique where the testers view the software as a black box with inputs and outputs.
In black box testing, the testers do not know how the software is structured inside the box; they know only what the software does, not how it does it.
This technique is valid for all levels of testing where a specification exists.

White box testing
White box testing is a technique that evaluates the internal logic and structure of the code.
To perform white box testing, the testers should have knowledge of coding so that they can deal with the internal code. They look into the internal code and find the unit that is malfunctioning.

22) What is ad hoc testing?

Ad hoc testing is an informal way of testing the software. It does not follow a formal process involving requirement documents, test plans, test cases, etc.
Characteristics of ad hoc testing:
Ad hoc testing is performed after the completion of formal testing on an application.
The main aim of ad hoc testing is to break the application without following any process.
The testers executing ad hoc testing should have deep knowledge of the product.

23) How is monkey testing different from ad hoc testing?

Both monkey testing and ad hoc testing follow an informal approach, but in monkey testing we do not need deep knowledge of the software.
However, to perform ad hoc testing, testers should have deep knowledge of the software.

24) How is ad hoc testing different from exploratory testing?

The following is the list of differences between ad hoc testing and exploratory testing:
Ad hoc testing is the testing of software without any documentation or requirements specification; in exploratory testing, the tester gains knowledge about the software while exploring the application.
Documentation is not required for ad hoc testing; documentation is mandatory in exploratory testing.
The main aim of ad hoc testing is to break the application; the main aim of exploratory testing is to learn the application.
Ad hoc testing is an informal approach; exploratory testing is a formal approach.
Ad hoc testing does not require an expert testing engineer; exploratory testing is usually carried out by an experienced testing engineer.

25) What are the different levels in software testing?

There are four different levels in software testing:
Unit/component testing
Integration testing
System testing
Acceptance testing

Unit testing
It is the lowest level in most models.
Units are the programs or modules in the software.
Unit testing is performed by the programmer, who tests the modules; if any bug is found, it is fixed immediately.

Integration testing
Integration means the combination of all the modules, and these modules are tested as a group.
Integration testing tests the data that flows from one module to another.
It basically checks the communication between two or more modules, not the functionality of the individual modules.

System testing
System testing tests the complete, integrated system.
It tests the software to ensure that it conforms to the requirements specified in the SRS document.
It is the final test and covers both functional and non-functional testing.

Acceptance testing
Acceptance testing is performed by the users or customers to check whether the software meets their requirements or not.

26) What is a bug life cycle?

The bug life cycle is also known as the defect life cycle. The bug life cycle is a specific set of states that a bug goes through; the number of states a defect goes through varies from project to project.

New: When a defect is logged and posted for the first time, its status is New.
Assigned: Once the bug is posted by the tester, the test lead approves the bug and assigns it to the development team.
Open: The developer starts analyzing and works on the defect fix.
Fixed: When the developer makes the necessary code changes and verifies them, he or she can mark the bug status as Fixed.
Retest: The tester retests the code at this stage to check whether the defect has been fixed by the developer, changing the status to Retest.
Reopen: If the bug persists even after the developer has fixed it, the tester changes the status to Reopen and the bug goes through the life cycle once again.
Verified: The tester retests the bug after it is fixed by the developer; if no bug is found, the status changes to Verified.
Closed: If the bug no longer exists, the status changes to Closed.
Duplicate: If the defect is repeated or corresponds to the same concept as a previous bug, the status changes to Duplicate.
Rejected: If the developer feels that the defect is not genuine, the status changes to Rejected.
Deferred: If the bug is not of high priority and can be fixed in the next release, the status changes to Deferred.
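The states above form a small state machine, and sketching it in plain JavaScript makes the allowed transitions explicit. The transition table below follows the descriptions in this answer and is purely illustrative, not a model of any particular bug tracker.

```javascript
// Bug life cycle as a state machine: each state maps to the
// statuses a bug may move to next.
const transitions = {
  New:      ['Assigned', 'Rejected', 'Duplicate', 'Deferred'],
  Assigned: ['Open'],
  Open:     ['Fixed', 'Rejected', 'Deferred'],
  Fixed:    ['Retest'],
  Retest:   ['Verified', 'Reopen'],
  Reopen:   ['Assigned'],   // the bug goes through the cycle again
  Verified: ['Closed'],
  Closed:   [],             // terminal states
  Rejected: [],
  Duplicate: [],
  Deferred: []
};

// Returns true if a bug may move from one status to another.
function canMove(from, to) {
  return (transitions[from] || []).includes(to);
}

console.log(canMove('Fixed', 'Retest'));  // true
console.log(canMove('Retest', 'Reopen')); // true: defect persists
console.log(canMove('Closed', 'Open'));   // false: Closed is terminal
```

Real projects vary the states and transitions, which is exactly why the answer notes that the number of states differs from project to project.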
A list of top frequently asked React Interview Questions and Answers are given below.

General React Interview Questions

1) What is React?

React is a declarative, efficient, flexible, open-source front-end JavaScript library developed by Facebook in 2011. It follows the component-based approach for building reusable UI components, especially for single-page applications. It is used for developing the interactive view layer of web and mobile apps. It was created by Jordan Walke, a software engineer at Facebook. It was initially deployed on Facebook's News Feed section in 2011 and later used in its other products such as WhatsApp and Instagram.

2) What are the features of React?

The React framework is quickly gaining popularity among web developers. The main features of React are:
JSX
Components
One-way data binding
Virtual DOM
Simplicity
Performance

3) What are the most crucial advantages of using React?

Following is a list of the most crucial advantages of using React:

React is easy to learn and use.
React comes with good availability of documentation, tutorials, and training resources. It is easy for any developer to switch from a JavaScript background to React and start creating web apps with it. Anyone with a little knowledge of JavaScript can start building web applications using React.

React follows the MVC architecture.
React is the V (view part) in the MVC (Model-View-Controller) architecture model and is often referred to as "one of the JavaScript frameworks." It is not a fully featured framework, but it has the advantages of an open-source JavaScript User Interface (UI) library, which helps execute tasks in a better manner.

React uses the Virtual DOM to improve efficiency.
React uses a virtual DOM to render the view. The virtual DOM is a virtual representation of the real DOM. Each time the data changes in a React app, a new virtual DOM gets created.
Creating a virtual DOM is much faster than rendering the UI inside the browser. Therefore, using the virtual DOM improves the efficiency of the app.

Creating dynamic web applications is easy.
In React, creating a dynamic web application is much easier. It requires less coding and gives more functionality. It uses JSX (JavaScript XML), a particular syntax that lets HTML quotes and HTML tag syntax render particular subcomponents.

React is SEO-friendly.
React facilitates developing an engaging user interface that can be easily navigated by various search engines. It also allows server-side rendering, which helps boost the SEO of your app.

React allows reusable components.
React web applications are made up of multiple components, where each component has its own logic and controls. These components provide small, reusable pieces of HTML code as output that can be reused wherever you need them. Code reusability helps developers make their apps easier to develop and maintain. It also makes the nesting of components easy and allows developers to build complex applications out of simple building blocks. The reuse of components also increases the pace of development.

Support of handy tools
React provides a lot of handy tools that make the developers' tasks easier. The React dev-tools extensions for Chrome and Firefox allow us to inspect the React component hierarchies in the virtual DOM, select particular components, and examine and edit their current props and state.

React has a rich set of libraries.
React has a huge ecosystem of libraries and gives you the freedom to choose the tools, libraries, and architecture for developing the best application for your requirements.

Scope for testing the code
React web applications are easy to test.
These applications provide a scope where developers can test and debug their code with the help of native tools.

4) What are the biggest limitations of React?

Following is the list of the biggest limitations of React:
React is just a library; it is not a complete framework.
It has a huge library which takes time to understand.
It may be difficult for new programmers to understand and code.
React uses inline templating and JSX, which may be difficult for some and act as a barrier. It can also make the coding complex.

5) What is JSX?

JSX stands for JavaScript XML. It is a React extension which allows writing JavaScript code that looks similar to HTML. It makes the HTML file easy to understand. The JSX file makes the React application robust and boosts its performance. JSX lets you write XML-like syntax in the same file where you write JavaScript code, and then a preprocessor (i.e., a transpiler such as Babel) transforms these expressions into actual JavaScript code. Just like XML/HTML, JSX tags have a tag name, attributes, and children.

Example

class App extends React.Component {
  render() {
    return (
      <h1>Hello JavaTpoint</h1>
    );
  }
}

In the above example, the text inside the <h1> tag is returned as JavaScript to the render function. After compilation, the JSX expression becomes a normal JavaScript function call, as shown below.

React.createElement("h1", null, "Hello JavaTpoint");

6) Why can't browsers read JSX?

Browsers cannot read JSX directly because they can only understand JavaScript objects, and JSX is not a regular JavaScript object.
Thus, we need to transform the JSX file into a regular JavaScript object using a transpiler such as Babel before passing it to the browser.

7) Why do we use JSX?

It is faster than regular JavaScript because it performs optimization while translating the code to JavaScript.
Instead of separating technologies by putting markup and logic in separate files, React uses components that contain both.
It is type-safe, and most errors can be found at compilation time.
It makes it easier to create templates.

8) What do you understand by Virtual DOM?

A virtual DOM is a lightweight JavaScript object which is an in-memory representation of the real DOM. It is an intermediary step between the render function being called and the elements being displayed on the screen. It is similar to a node tree which lists the elements, their attributes, and content as objects and their properties. The render function creates a node tree of the React components and then updates this node tree in response to mutations in the data model caused by actions of the user or the system.

9) Explain the working of the Virtual DOM.

The virtual DOM works in three steps:
1. Whenever any data changes in the React app, the entire UI is re-rendered in the virtual DOM representation.
2. The difference between the previous DOM representation and the new one is calculated.
3. Once the calculations are complete, the real DOM is updated with only the things that have actually changed.

10) How is React different from Angular?

React differs from Angular in the following ways:
Author: Angular is by Google; React is by Facebook.
Developer: Angular was created by Misko Hevery; React by Jordan Walke.
Initial release: Angular in October 2010; React in March 2013.
Language: Angular uses JavaScript and HTML; React uses JSX.
Type: Angular is an open-source MVC framework; React is an open-source JavaScript library.
Rendering: Angular renders on the client side; React supports server-side rendering.
Data binding: Angular is bi-directional; React is uni-directional.
DOM: Angular uses the regular DOM; React uses a virtual DOM.
Testing: Angular supports unit and integration testing; React focuses on unit testing.
App architecture: Angular follows MVC; React follows Flux.
Performance: Angular is slower; React is fast, thanks to the virtual DOM.

11) How is React's ES6 syntax different from ES5 syntax?

React's ES6 syntax has changed from ES5 syntax in the following aspects.

require vs. import

// ES5
var React = require('react');
// ES6
import React from 'react';

exports vs. export

// ES5
module.exports = Component;
// ES6
export default Component;

component and function

// ES5
var MyComponent = React.createClass({
  render: function() {
    return <h1>Hello JavaTpoint</h1>;
  }
});
// ES6
class MyComponent extends React.Component {
  render() {
    return <h1>Hello JavaTpoint</h1>;
  }
}

props

// ES5
var App = React.createClass({
  propTypes: { name: React.PropTypes.string },
  render: function() {
    return <h1>Hello, {this.props.name}!</h1>;
  }
});
// ES6
class App extends React.Component {
  render() {
    return <h1>Hello, {this.props.name}!</h1>;
  }
}

state

// ES5
var App = React.createClass({
  getInitialState: function() {
    return { name: 'world' };
  },
  render: function() {
    return <h1>Hello, {this.state.name}!</h1>;
  }
});
// ES6
class App extends React.Component {
  constructor() {
    super();
    this.state = { name: 'world' };
  }
  render() {
    return <h1>Hello, {this.state.name}!</h1>;
  }
}

12) What is the difference between ReactJS and React Native?

The main differences between ReactJS and React Native are given below.
1. ReactJS was initially released in 2013; React Native in 2015.
2. ReactJS is used for developing web applications; React Native for developing mobile applications.
3. ReactJS can be executed on all platforms; React Native is not platform-independent and takes more effort to run on all platforms.
4. ReactJS uses a JavaScript library and CSS for animations; React Native comes with built-in animation libraries.
5. ReactJS uses React Router for navigating between pages; React Native has a built-in Navigator library for navigating mobile applications.
6. ReactJS uses HTML tags; React Native does not use HTML tags.
7. In ReactJS, the virtual DOM renders browser code; React Native uses its native APIs to render code for mobile applications.

13) What is the difference between the Real DOM and the Virtual DOM?

The following are the key differences between the real DOM and the virtual DOM:
The real DOM updates slower; the virtual DOM updates faster.
The real DOM can directly update HTML; the virtual DOM cannot directly update HTML.
The real DOM creates a new DOM if an element updates; the virtual DOM updates the JSX if an element updates.
In the real DOM, DOM manipulation is very expensive; in the virtual DOM, DOM manipulation is very easy.
The real DOM wastes a lot of memory; the virtual DOM causes no memory wastage.

React Component Interview Questions

14) What do you understand from "In React, everything is a component"?

In React, components are the building blocks of React applications. These components divide the entire React application's UI into small, independent, and reusable pieces of code. React renders each of these components independently without affecting the rest of the application UI.
Hence, we can say that in React, everything is a component.

15) Explain the purpose of render() in React.

Each React class component must have a render() function. The render function returns the HTML you want to display in the component. If you need to render more than one HTML element, you need to group them together inside a single enclosing (parent) tag such as <div>, <form>, etc. This function must return the same result each time it is invoked.

Example: If you need to display a heading, you can do it as below.

import React from 'react'

class App extends React.Component {
  render() {
    return <h1>Hello World</h1>
  }
}

export default App

Points to note:
Each render() function contains a return statement.
The return statement can have only one parent HTML tag.

16) How can you embed two or more components into one?

You can embed two or more components in the following way (here the App component renders the Example component inside its own markup):

import React from 'react'

class Example extends React.Component {
  render() {
    return <h1>Hello JavaTpoint</h1>
  }
}

class App extends React.Component {
  render() {
    return (
      <div>
        <h1>Hello World</h1>
        <Example />
      </div>
    )
  }
}

export default App

17) What are props?

Props stands for "properties" in React. Props are read-only inputs to components. A props object stores the values of a tag's attributes and works similarly to HTML attributes. Props give a way to pass data from parent to child components throughout the application. They are similar to function arguments and are passed to a component the same way arguments are passed to a function. Props are immutable, so we cannot modify props from inside a component. Inside a component, the attributes added to it are available as this.props and can be used to render dynamic data in the render method.

18) What is State in React?

The state is an updatable structure that holds the data and information about the component.
It may change over the lifetime of the component in response to user actions or system events. It is the heart of a React component, determining the component's behavior and how it renders. It should be kept as simple as possible.
Let's create a "User" component with a "message" state:

import React from 'react'

class User extends React.Component {
  constructor(props) {
    super(props)
    this.state = { message: 'Welcome to JavaTpoint' }
  }
  render() {
    return <h1>{this.state.message}</h1>
  }
}

export default User

19) Differentiate between States and Props.
The major differences between State and Props are given below:
1. Props are read-only, whereas State changes can be asynchronous.
2. Props are immutable, whereas State is mutable.
3. Props allow you to pass data from one component to other components as arguments, whereas State holds information about the component itself.
4. Props can be accessed by the child component; State cannot be accessed by child components.
5. Props are used to communicate between components, whereas State is used for rendering dynamic changes within the component.
6. Stateless components can have Props but cannot have State.
7. Props make components reusable; State does not.
8. Props are external and controlled by whatever renders the component, whereas State is internal and controlled by the component itself.

20) How can you update the State of a component?
We can update the State of a component using the this.setState() method. This method does not always replace the State immediately. Instead, it only merges changes into the original State.
It is the primary method used to update the user interface (UI) in response to event handlers and server responses.
Example:

import React from 'react';

class App extends React.Component {
  constructor() {
    super();
    this.state = { msg: "Welcome to JavaTpoint" };
    this.updateSetState = this.updateSetState.bind(this);
  }
  updateSetState() {
    this.setState({ msg: "It's the best ReactJS tutorial" });
  }
  render() {
    return (
      <div>
        <h1>{this.state.msg}</h1>
        <button onClick={this.updateSetState}>Update State</button>
      </div>
    );
  }
}

export default App;

23) What is an event in React?
An event is an action triggered by a user action or a system-generated occurrence, such as a mouse click, the loading of a web page, a key press, or a window resize. In React, the event handling system is very similar to handling events on DOM elements. The React event handling system is known as the Synthetic Event system, which is a cross-browser wrapper around the browser's native event.
Handling events in React has some syntactic differences:
React events are named using camelCase instead of lowercase.
With JSX, a function is passed as the event handler instead of a string.

24) How do you create an event in React?
We can create an event as follows:

class Display extends React.Component {
  show(msgEvent) {
    // event-handling code
  }
  render() {
    // Here, we render a div with an onClick prop
    return <div onClick={this.show}>Click Me!</div>;
  }
}

31) Explain the lifecycle methods of React components in detail.
The important React lifecycle methods are:
getInitialState(): It is used to specify the default value of this.state. It is executed before the creation of the component.
componentWillMount(): It is executed before a component gets rendered into the DOM.
componentDidMount(): It is executed when the component has been rendered and placed in the DOM. At this point, you can perform any DOM-querying operations.
componentWillReceiveProps(): It is invoked when a component receives new props from its parent and before another render is called.
If you want to update the State in response to prop changes, you should compare this.props and nextProps and perform the State transition using the this.setState() method.
shouldComponentUpdate(): It is invoked when a component decides whether to apply changes/updates to the DOM, and it returns true or false based on certain conditions. If this method returns true, the component will update; otherwise, the component will skip updating.
componentWillUpdate(): It is invoked before rendering takes place in the DOM. Here, you can't change the component State by invoking this.setState(). It will not be called if shouldComponentUpdate() returns false.
componentDidUpdate(): It is invoked immediately after rendering takes place. You can put any code inside this method that you want to execute once the update occurs.
componentWillUnmount(): It is invoked immediately before a component is destroyed and unmounted permanently. It is used to clean up resources, such as invalidating timers, removing event listeners, canceling network requests, or cleaning up DOM elements. Once a component instance is unmounted, it cannot be mounted again.

32) What are Pure Components?
Pure components were introduced in React version 15.3. React.Component and React.PureComponent differ in the shouldComponentUpdate() lifecycle method. This method decides whether the component re-renders by returning a boolean value (true or false). In React.Component, shouldComponentUpdate() returns true by default, but React.PureComponent performs a shallow comparison of state and props to decide whether to re-render. Pure components enhance the simplicity of the code and the performance of the application.

33) What are Higher Order Components (HOC)?
In React, a Higher Order Component is an advanced technique for reusing component logic. It is a function that takes a component and returns a new component. In other words, it is a function which accepts another function as an argument.
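The "function that takes a component and returns a new component" idea can be sketched framework-free. In this minimal sketch, "components" are plain functions that return markup strings, and withLogging is a hypothetical higher-order function (not part of the React API):

```javascript
// A plain-function "component": takes props, returns markup.
function Button(props) {
  return `<button>${props.label}</button>`;
}

// Hypothetical HOC: takes a component, returns a new component
// that adds logging behavior and then delegates rendering.
function withLogging(WrappedComponent) {
  return function LoggedComponent(props) {
    console.log('rendering with props:', JSON.stringify(props));
    return WrappedComponent(props);
  };
}

const LoggedButton = withLogging(Button);
console.log(LoggedButton({ label: 'Save' })); // same markup as Button, plus the log line
```

A real React HOC works the same way, except that the wrapped value is a React component and the returned wrapper renders it with JSX.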
According to the official website, it is not a feature (part) of the React API, but a pattern that emerges from React's compositional nature.

34) What can you do with HOC?
You can do many tasks with HOC; some of them are given below:
Code reusability
Props manipulation
State manipulation
Render hijacking

35) What is the difference between Element and Component?
The main differences between Elements and Components are:
1. An element is a plain JavaScript object which describes a DOM node and its desired properties, whereas a component is the core building block of a React application: a class or function which accepts an input and returns a React element.
2. An element only holds information about the component type, its properties, and any child elements inside it, whereas a component can contain state and props and has access to the React lifecycle methods.
3. An element is immutable; a component is mutable.
4. We cannot apply any methods to elements, but we can apply methods to components.
5. Example of an element:
const element = React.createElement('div', {id: 'login-btn'}, 'Login')
Example of a component:
function Button({ onLogin }) {
  return React.createElement('div', {id: 'login-btn', onClick: onLogin}, 'Login')
}

36) How to write comments in React?
In JSX, comments are written inside curly braces, in two ways:
1. Single-line comments:
{/* Single-line comment */}
2. Multi-line comments:
{/* Multi-line
comment */}

37) Why is it necessary to start component names with a capital letter?
In React, it is necessary to start component names with a capital letter. If we start a component name with a lower-case letter, React will treat it as an unrecognized tag, because in JSX, lower-case tag names are considered HTML tags.

38) What are fragments?
Fragments were introduced in React version 16.2.
In React, Fragments let components return multiple elements. They allow you to group a list of children without adding an extra node to the DOM.
Example (ChildA and ChildB are placeholder child components):

render() {
  return (
    <React.Fragment>
      <ChildA />
      <ChildB />
    </React.Fragment>
  )
}

There is also a shorthand syntax for declaring Fragments, but it is not supported by all tools:

render() {
  return (
    <>
      <ChildA />
      <ChildB />
    </>
  )
}

39) Why are fragments better than container divs?
Fragments are faster and consume less memory because they do not create an extra DOM node.
Some CSS mechanisms, such as CSS Grid and Flexbox, rely on a special parent-child relationship, and adding divs in the middle makes it hard to keep the desired layout.
The DOM inspector is less cluttered.

40) How to apply validation on props in React?
Props validation is a tool that helps developers avoid future bugs and problems, and it makes the code more readable. React components can use the special propTypes property, which helps you catch bugs by validating the data types of values passed through props, although defining propTypes is optional.
We can apply validation on props by setting App.propTypes on a React component. When some props are passed with an invalid type, you will get warnings in the JavaScript console. After specifying the validation patterns, you can also set App.defaultProps to provide default prop values.

import PropTypes from 'prop-types';

class App extends React.Component {
  render() { /* ... */ }
}

App.propTypes = { /* definitions, e.g. name: PropTypes.string */ };

41) What is create-react-app?
Create React App is a tool introduced by Facebook for building React applications. It lets you create single-page React applications. Projects generated by create-react-app come preconfigured, which saves you from the time-consuming setup and configuration of tools like Webpack or Babel. You need to run a single command to start a React project:

$ npx create-react-app my-app

This command includes everything we need to build a React app.
Some of the included features are given below:
It includes React, JSX, ES6, and Flow syntax support.
It includes autoprefixed CSS, so you don't need -webkit- or other prefixes.
It includes a fast, interactive unit test runner with built-in support for coverage reporting.
It includes a live development server that warns about common mistakes.
It includes a build script to bundle JS, CSS, and images for production, with hashes and source maps.

42) How can you create a component in React?
There are two possible ways to create a component in React:
Function components: This is the simplest way to create a component. These are plain JavaScript functions that accept a props object as the first parameter and return React elements:

function Greeting({ message }) {
  return <h1>{`Hello, ${message}`}</h1>
}

Class components: This method lets you use an ES6 class to define a component. The above function component can be written as:

class Greeting extends React.Component {
  render() {
    return <h1>{`Hello, ${this.props.message}`}</h1>
  }
}

43) When do we prefer to use a class component over a function component?
If a component needs state or lifecycle methods, we traditionally used a class component; otherwise, a function component. However, since React 16.8 and the addition of Hooks, you can use state, lifecycle features, and other capabilities that were previously only available in class components right in your function components.

44) Is it possible for a web browser to read JSX directly?
Web browsers can't read JSX directly. This is because browsers are built to read regular JavaScript only, and JSX is not regular JavaScript. If you want a web browser to read a JSX file, the file must first be transformed into regular JavaScript. Babel is used for this purpose.

45) What do you understand by the state in React?
In React, the state of a component is an object that holds some information that may change over the component's lifetime.
It is best to keep your state as simple as possible and to minimize the number of stateful components.
Let's see how to create a User component with a message state:

class User extends React.Component {
  constructor(props) {
    super(props)
    this.state = { message: 'Welcome to React world' }
  }
  render() {
    return (
      <div>
        <h1>{this.state.message}</h1>
      </div>
    )
  }
}

State is very similar to props, but it is private and fully controlled by the component; that is, it is not accessible to any other component until the owner component decides to pass it down.

46) How different is React's ES6 syntax compared to ES5?
The following are the most visible syntax differences between ES5 and ES6:

require vs import
Syntax in ES5:
var React = require('react');
Syntax in ES6:
import React from 'react';

exports vs export
Syntax in ES5:
module.exports = Component;
Syntax in ES6:
export default Component;

Component definition
Syntax in ES5:
var MyComponent = React.createClass({
  render: function() {
    return <h3>Hello JavaTpoint!</h3>;
  }
});
Syntax in ES6:
class MyComponent extends React.Component {
  render() {
    return <h3>Hello JavaTpoint!</h3>;
  }
}

props
Syntax in ES5:
var App = React.createClass({
  propTypes: { name: React.PropTypes.string },
  render: function() {
    return <h3>Hello, {this.props.name}!</h3>;
  }
});
Syntax in ES6:
class App extends React.Component {
  render() {
    return <h3>Hello, {this.props.name}!</h3>;
  }
}

state
Syntax in ES5:
var App = React.createClass({
  getInitialState: function() {
    return { name: 'world' };
  },
  render: function() {
    return <h3>Hello, {this.state.name}!</h3>;
  }
});
Syntax in ES6:
class App extends React.Component {
  constructor() {
    super();
    this.state = { name: 'world' };
  }
  render() {
    return <h3>Hello, {this.state.name}!</h3>;
  }
}

47) What do you understand by props in React?
In React, the props are inputs to components.
They are single values or objects containing a set of values that are passed to components on creation, using a naming convention similar to HTML tag attributes. They are data passed down from a parent component to a child component.
The main purpose of props in React is to provide the following component functionality:
Pass custom data to your component.
Trigger state changes.
Use this.props.reactProp inside a component's render() method.
For example, we can pass a reactProp property to an element (the component name here is illustrative): <DemoComponent reactProp="value" />. This reactProp name then becomes a property attached to React's props object, which exists on every component created with the React library, and can be read as this.props.reactProp.

React Refs Interview Questions

48) What do you understand by refs in React?
Refs is shorthand for "references" in React. A ref is an attribute which helps to store a reference to a particular DOM node or React element. It provides a way to access React DOM nodes or React elements and interact with them. Refs are used when we want to change the value of a child component without using props.

49) How to create refs?
Refs can be created by using React.createRef() and attached to React elements via the ref attribute. A ref is commonly assigned to an instance property when a component is constructed, so it can be referenced throughout the component.

class MyComponent extends React.Component {
  constructor(props) {
    super(props);
    this.callRef = React.createRef();
  }
  render() {
    return <div ref={this.callRef} />;
  }
}

51) Which is the preferred option: callback refs or findDOMNode()?
The preferred option is to use callback refs over the findDOMNode() API.
Callback refs give better control over when refs are set and unset, whereas findDOMNode() prevents certain future improvements in React.
The legacy approach using findDOMNode():

class MyComponent extends Component {
  componentDidMount() {
    findDOMNode(this).scrollIntoView()
  }
  render() {
    return <div />
  }
}

The recommended approach is:

class MyComponent extends Component {
  componentDidMount() {
    this.node.scrollIntoView()
  }
  render() {
    return <div ref={node => this.node = node} />
  }
}

52) What is the use of Refs?
Refs in React are used in the following cases:
To return a reference to an element.
When we need DOM measurements, such as managing focus, text selection, or media playback.
To trigger imperative animations.
When integrating with third-party DOM libraries.
They can also be used in callbacks.

React Router Interview Questions

53) What is React Router?
React Router is the standard routing library built on top of React. It is used to create routing in a React application using the React Router package. It lets you define multiple routes in the app and keeps the browser URL in sync with the data displayed on the web page. It maintains the standard structure and behavior of the application and is mainly used for developing single-page web applications.

54) Why do we need a Router in React?
React Router plays an important role in displaying multiple views in a single-page application. It is used to define multiple routes in the app. When a user types a specific URL into the browser, and this URL path matches any route inside the router configuration, the user is redirected to that particular route. So, we need to add a Router library to the React app, which allows us to create multiple routes, each leading to a unique view.
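The core idea of routing, matching a URL path to a view, can be sketched framework-free. The route table and matchRoute helper below are illustrative only and are not React Router's API (a real router also handles history, nesting, and path parameters such as /users/:id):

```javascript
// Minimal sketch: a route table mapping URL paths to view names.
const routes = {
  '/': 'HomeView',
  '/about': 'AboutView',
  '/users': 'UserListView',
};

// Exact-match lookup with a fallback for unknown paths.
function matchRoute(path) {
  return routes[path] || 'NotFoundView';
}

console.log(matchRoute('/about'));   // the view registered for /about
console.log(matchRoute('/missing')); // falls back to NotFoundView
```

A router library layers this kind of lookup on top of the browser history API, so that changing the URL swaps the rendered view without a full page reload.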
64) What are the rules you should follow for hooks in React?
We have to follow these two rules to use hooks in React:
Call hooks only at the top level of your React functions, not inside loops, conditions, or nested functions. This ensures that hooks are called in the same order on every render, which is what preserves the state of hooks between multiple useState and useEffect calls.
Call hooks from React functions only. Don't call hooks from regular JavaScript functions.

65) What are forms in React?
In React, forms are used to enable users to interact with web applications. The following is a list of the most common uses of forms in React:
Forms facilitate user interaction with the application. By using forms, users can communicate with the application and enter required information whenever needed.
Forms contain elements such as text fields, buttons, checkboxes, and radio buttons that make the application more interactive and attractive.
Forms are the best possible way to take input from users.
Forms are used for many different tasks, such as user authentication, searching, filtering, and indexing.

66) What is an error boundary or error boundaries?
An error boundary is a concept introduced in React 16. Error boundaries provide a way to catch errors that occur during rendering. Any class component that defines the static getDerivedStateFromError() or componentDidCatch() lifecycle method is considered an error boundary.
Let's see the places where an error boundary can detect an error:
The render phase
Inside a lifecycle method
Inside the constructor
Let's see an example to understand it better. Without an error boundary, the following component crashes the whole application when its counter reaches 2:

class CounterComponent extends React.Component {
  constructor(props) {
    super(props);
    this.state = { counterValue: 0 };
    this.incrementCounter = this.incrementCounter.bind(this);
  }
  incrementCounter() {
    this.setState(prevState => ({ counterValue: prevState.counterValue + 1 }));
  }
  render() {
    if (this.state.counterValue === 2) {
      throw new Error('Crashed');
    }
    return (
      <button onClick={this.incrementCounter}>
        {this.state.counterValue}
      </button>
    );
  }
}

67) In which cases do error boundaries not catch errors?
The following are some cases in which error boundaries don't catch errors:
Errors thrown inside event handlers.
Errors during server-side rendering.
Errors thrown in the error boundary code itself.
Errors in asynchronous code, such as setTimeout or requestAnimationFrame callbacks.

React Redux Interview Questions

68) What were the major problems with the MVC framework?
The major problems with the MVC framework are:
DOM manipulation was very expensive.
It made applications slow and inefficient.
There was huge memory wastage.
It made application debugging hard.

69) Explain the Flux concept.
Flux is an application architecture that Facebook uses internally for building client-side web applications with React. It is neither a library nor a framework. It is a kind of architecture that complements React as the view and follows the concept of a unidirectional data flow model. It is useful when a project has dynamic data and we need to keep that data updated in an effective manner.

70) What is Redux?
Redux is an open-source JavaScript library used to manage application state. React uses Redux for building the user interface. A Redux application is easy to test and can run in different environments while showing consistent behavior. It was first introduced by Dan Abramov and Andrew Clark in 2015.
React Redux is the official React binding for Redux.
It allows React components to read data from a Redux Store and dispatch Actions to the Store to update data. Redux helps apps scale by providing a sensible way to manage state through a unidirectional data flow model. React Redux is conceptually simple: it subscribes to the Redux store, checks whether the data your component wants has changed, and re-renders your component.

71) What are the three principles that Redux follows?
The three principles that Redux follows are:
Single source of truth: The State of your entire application is stored in an object/state tree inside a single Store. A single State tree makes it easier to track changes over time and to debug or inspect the application.
State is read-only: The only way to change the State is to emit an Action, an object describing what happened. This principle ensures that neither the views nor the network callbacks can write directly to the State.
Changes are made with pure functions: To specify how Actions transform the state tree, you write reducers (pure functions). Pure functions take the previous State and an Action as parameters and return a new State.

72) List down the components of Redux.
The components of Redux are given below:
STORE: The Store is the place where the entire State of your application lives. It is like a brain responsible for all the moving parts in Redux.
ACTION: An Action is an object which describes what happened.
REDUCER: A Reducer determines how the State will change.

73) Explain the role of the Reducer.
Reducers read the payloads from Actions and then update the Store via the State accordingly. A reducer is a pure function which returns a new state from the previous State and an Action.
It returns the previous State as-is if no work needs to be done.

74) What is the significance of the Store in Redux?
A Store is an object which holds the application's State and provides methods to access the State, dispatch Actions, and register listeners via subscribe(listener). The entire State tree of an application is saved in a single Store, which makes Redux simple and predictable. We can pass middleware to the Store to handle the processing of data as well as to keep a log of the various actions that change the Store's State. All Actions return a new state via reducers.

75) How is Redux different from Flux?
Redux differs from Flux in the following ways:
1. Redux is an open-source JavaScript library used to manage application State, whereas Flux is neither a library nor a framework; it is an architecture that complements React as the view and follows a unidirectional data flow model.
2. In Redux, the Store's State is immutable; in Flux, the Store's State is mutable.
3. In Redux, the Store and the change logic are separate; in Flux, the Store contains both State and change logic.
4. Redux has only a single Store; Flux can have multiple Stores.
5. Redux does not have the Dispatcher concept; Flux has a single Dispatcher, and all actions pass through it.

76) What are the advantages of Redux?
The main advantages of React Redux are:
React Redux is the official UI binding for React applications. It is kept up to date with any API changes to ensure that your React components behave as expected.
It encourages good React architecture.
It implements many performance optimizations internally, which allows components to re-render only when they actually need to.
It makes code maintenance easy.
Redux code is written as functions which are small, pure, and isolated, which makes the code testable and independent.

77) How to access the Redux store outside a component?
You need to export the Store from the module where it was created with the createStore() method.
Also, you need to ensure that it will not pollute the global window space.

const store = createStore(myReducer)
export default store

Some Most Frequently Asked React MCQs

1) What is Babel in React?
Babel is a transpiler.
Babel is an interpreter.
Babel is a compiler.
Babel is both a compiler and a transpiler.

2) What do you understand by the Reconciliation process in React?
The Reconciliation process is a process through which React updates the DOM.
The Reconciliation process is a process through which React deletes the DOM.
The Reconciliation process is a process through which React updates and deletes the component.
It is a process to set the state.

3) Which of the following is used to pass data to a component from outside?
setState
props
render with arguments
PropTypes

4) Which of the following functions allows you to render React content on an HTML page?
React.mount()
React.start()
React.render()
ReactDOM.render()

5) Which of the following shows the correct phases of the component lifecycle?
Mounting: getDerivedStateFromProps(); Updating: componentWillUnmount(); Unmounting: shouldComponentUpdate()
Mounting: componentWillUnmount(); Updating: render(); Unmounting: setState()
Mounting: componentDidMount(); Updating: componentDidUpdate(); Unmounting: componentWillUnmount()
Mounting: constructor(); Updating: getDerivedStateFromProps(); Unmounting: render()

6) In the MVC (Model, View, Controller) model, what is the role of React?
React is the Middleware in MVC.
React is the Controller in MVC.
React is the Model in MVC.
React is the Router in MVC.

7) Which of the following is the most precise difference between a Controlled Component and an Uncontrolled Component?
In controlled components, every state mutation will have an associated handler function.
On the other hand, uncontrolled components store their states internally.
The controlled components store their states internally, while in the uncontrolled components, every state mutation will have an associated handler function.
The controlled component is good at controlling itself, while the uncontrolled component has no idea how to control itself.
Every state mutation does not have an associated handler function in controlled components, while the uncontrolled components do not store their states internally.

8) What are the arbitrary inputs of components in React called?
Keys
Props
Elements
Refs

9) What do you understand by the "key" prop in React?
The "key" prop is used to make things look pretty, and there is no benefit whatsoever.
The "key" prop is a way for React to identify a newly added item in a list and compare it during the "diffing" algorithm.
The "key" prop is one of the attributes in HTML.
The "key" prop is not commonly used in arrays.

10) Which of the following is the correct data flow sequence of the Flux concept in React?
Action -> Dispatcher -> View -> Store
Action -> Dispatcher -> Store -> View
Action -> Store -> Dispatcher -> View
None of the above.
Published - Tue, 06 Dec 2022
Created by - Admin s
1) What is GIT?
Git is an open-source distributed version control system and source code management (SCM) system designed to handle small and large projects with speed and efficiency.

2) Which language is used in Git?
Git is written in the C language. Git is quick, and C makes this possible by reducing the runtime overhead associated with higher-level languages.

3) What is a repository in Git?
A repository contains a directory named .git, where Git keeps all of the metadata for the repository. The contents of the .git directory are private to Git.

4) What is a 'bare repository' in Git?
A "bare" repository in Git contains only the version control information and no working files (no working tree), and it doesn't include the special .git sub-directory. Instead, it contains all of the contents of the .git sub-directory directly in the main directory itself, whereas a normal working repository consists of:
A .git subdirectory with all of the Git-related revision history of your repo.
A working tree, i.e., checked-out copies of your project files.

5) What is the purpose of git stash?
git stash takes the current state of the working directory and the index, puts it on a stack for later, and gives you back a clean working directory. So if you are in the middle of something and need to jump over to another task without losing your current edits, you can use git stash.

6) What is git stash drop?
When you are done with a stashed item or want to remove it from the list, run the 'git stash drop' command.
It will delete the last added stash item by default, and it can also remove a specific stash if you pass it as an argument.

7) What are the advantages of using GIT?
Here are some of the essential advantages of Git:
Data redundancy and data replication are possible.
It is a highly available service.
For one repository, you have only one .git directory.
The network performance and disk utilization are excellent.
It is effortless to collaborate on any project.
You can work on any project within Git.

8) What is the function of 'git push' in GIT?
'git push' updates remote refs along with the related objects.

9) Why do we require branching in GIT?
With the help of branching, you can maintain your own branch and jump between different branches. You can go back to your previous work while keeping your recent work intact.

10) What is the purpose of 'git config'?
'git config' is a convenient way to configure your options for a Git installation. Using this command, you can define repository behavior, preferences, and user information.

11) What is the definition of "Index" or "Staging Area" in GIT?
Before completing a commit, you can modify, format, and review the changes in an intermediate area known as the 'Staging Area' or 'Index'.

12) What is a 'conflict' in Git?
A 'conflict' arises when a commit that has to be merged has a change in one place and the current branch also has a change in the same place. Git cannot easily predict which change should take precedence.

13) What is the difference between git pull and git fetch?
The git pull command pulls new commits from a particular branch of your central repository and updates your target branch in your local repository.
Git fetch is used for the same purpose, but it works in a slightly different way. When you perform a git fetch, it pulls all new commits from the desired branch and saves them in a remote-tracking branch in your local repository.
If you need to reflect these changes in your target branch, git fetch should be followed by a git merge. Your target branch will only be updated after merging the fetched branch into it. To make it simple, remember the equation:
git pull = git fetch + git merge

14) How to resolve a conflict in Git?
If you need to resolve a conflict in Git, edit the files to fix up the conflicting changes, then run "git add" to add the resolved files, and after that run 'git commit' to commit the repaired merge.

15) What is the purpose of git clone?
The git clone command creates a copy of an existing Git repository. 'Cloning' is the simplest way for programmers to get a copy of a central repository.

16) What is git pull origin?
A pull is a fetch plus a merge. 'git pull origin master' fetches commits from the master branch of the origin remote (into the local origin/master branch), and then merges origin/master into the branch you currently have checked out.

17) What is the difference between git commit and git push?
git commit "records changes to the repository", while git push "updates remote refs along with associated objects". So the first one works against your local repository, while the latter interacts with a remote repository.

18) Why is GIT better than Subversion?
Git is an open-source version control framework; it enables you to keep 'versions' of a project, which record the changes that were made to the code over time, and it allows you to backtrack if necessary and undo those changes. Multiple developers can check out and push changes, and each change can then be attributed to a specific developer.

19) Explain what a commit message is.
A commit message is a feature of Git which appears when you commit a change.
Git opens a text editor where you can enter a description of the changes made in the commit.

20) Why is it desirable to create an additional commit rather than amending an existing commit?
There are a couple of reasons:
The amend operation will destroy the state that was previously saved in a commit. If only the commit message gets changed, that's not a problem. But if the contents are being modified, the chances of eliminating something important are higher.
Overusing "git commit --amend" can cause a small commit to grow and acquire unrelated changes.

21) What do 'hooks' consist of in Git?
This directory consists of shell scripts which are activated after running the corresponding Git commands. For example, Git will attempt to execute the post-commit script after you run a commit.

22) What is the distinction between Git and GitHub?
Git is a version control system, a tool to manage your source code history.
GitHub is a hosting service for Git repositories.
GitHub is a website where you can upload a copy of your Git repository. It is a Git repository hosting service, which offers all of the distributed version control and source code management (SCM) functionality of Git as well as adding its own features.

23) In Git, how would you revert a commit that has already been pushed and made public?
There can be two answers to this question, and make sure that you include both, because either of the options below can be used depending on the situation:
Remove or fix the bad file in a new commit and push it to the remote repository. This is the most natural way to correct a mistake. Once you have made the necessary changes to the file, commit it to the remote repository; for that you would use
git commit -m "commit message"
Make a new commit that undoes all the changes that were made in the bad commit.
To do this, use the command
git revert <commit-hash>

24) What does the commit object contain?
A commit object contains the following parts; you should mention all three:
A set of files, representing the state of a project at a given point in time
References to parent commit objects
An SHA-1 name, a 40-character string that uniquely identifies the commit object

25) Describe the branching strategies you have used.
This question is meant to test your branching knowledge with Git, so tell them how you have used branching in your past work and what purpose it served. You can refer to the points below:
Feature Branching
A feature branch model keeps all of the changes for a particular feature inside a branch. When the feature is fully tested and validated by automated tests, the branch is then merged into master.
Task Branching
In this model, each task is implemented on its own branch with the task key included in the branch name. It is easy to see which code implements which task; just look for the task key in the branch name.
Release Branching
Once the develop branch has acquired enough features for a release, you can clone that branch to form a release branch. Creating this branch starts the next release cycle, so no new features can be added after this point; only bug fixes, documentation generation, and other release-oriented tasks should go into this branch. Once it is ready to ship, the release gets merged into master and tagged with a version number.
Likewise, it should be merged back into the develop branch, which may have progressed since the release was started.
Finally, tell them that branching strategies vary from one organization to another, and that you know basic branching operations like delete, merge, checking out a branch, etc.

26) How will you know in Git if a branch has already been merged into master?
The answer is direct. To know whether a branch has been merged into master or not, you can use the commands below:
git branch --merged lists the branches that have been merged into the current branch.
git branch --no-merged lists the branches that have not been merged.

27) How might you fix a broken commit?
To fix a broken commit, use the command "git commit --amend". By running this command, you can fix the broken commit message in the editor.

28) Mention the various Git repository hosting services.
The following are Git repository hosting services:
Pikacode
Visual Studio Online
GitHub
GitEnterprise
SourceForge.net

29) Mention some of the best graphical Git clients for Linux.
Some of the best Git clients for Linux are:
Git Cola
SmartGit
Gitg
Git GUI
Giggle
qGit

30) What is SubGit? Why use it?
'SubGit' is a tool that migrates SVN to Git. It provides a stable and stress-free migration. SubGit is one of the solutions for a company-wide migration from SVN to Git:
It is much better than git-svn
There is no need to change the infrastructure that is already in place
It allows using all Git and all Subversion features
It provides a stress-free migration experience
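The "git pull = git fetch + git merge" equation from question 13 can be demonstrated with a short, self-contained shell session. This is a minimal sketch assuming git is installed; the repository and file names are invented for the example:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A throwaway "central" repository with one commit
git init -q remote-repo
cd remote-repo
git config user.email "you@example.com"
git config user.name "You"
echo hello > file.txt
git add file.txt
git commit -qm "initial commit"
cd ..

# A local clone that will fall behind
git clone -q remote-repo local-repo

# New work appears on the remote
cd remote-repo
echo update >> file.txt
git commit -qam "remote change"
cd ..

# In the clone: git pull is equivalent to git fetch + git merge
cd local-repo
git fetch -q origin        # downloads the new commit without touching the working tree
git merge -q FETCH_HEAD    # integrates the fetched commit into the current branch
```

After the merge, the clone's history contains both commits, exactly as a single `git pull origin` would have produced.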
Published - Tue, 06 Dec 2022
A list of top frequently asked J2EE Interview Questions and answers are given below.

1) What do you understand by J2EE?
J2EE stands for Java 2 Enterprise Edition. J2EE is used for developing and deploying multi-tier web-based enterprise applications. The J2EE platform is the combination of a set of services, application programming interfaces (APIs), and protocols. The J2EE platform adds the capabilities required to provide a complete, stable, secure, and fast Java platform at the enterprise level.

2) What do you mean by a J2EE Module?
A J2EE module is a software unit that consists of one or more J2EE components for the same container type with one deployment descriptor of that type. Modules can be easily deployed or assembled into J2EE applications.

3) What are the four types of J2EE modules?
J2EE defines four types of modules:
Application Client Module
Web Module
Enterprise JavaBeans Module
Resource Adapter Module

4) What does the application client module contain?
The application client module contains the following:
Class files
Client deployment descriptor
It is packaged as a JAR file with a .jar extension.

5) What does the web module contain?
The web module contains the following:
JSP (Java Server Pages) files
Class files for servlets
Web deployment descriptor
GIF (Graphics Interchange Format) and HTML (Hypertext Markup Language) files
These modules are packaged as archive files with a .war (Web Archive) extension.

6) What does the Enterprise JavaBeans module contain?
The Enterprise JavaBeans (EJB) module contains the following:
Class files for enterprise beans
An EJB deployment descriptor
These modules are packaged as JAR files with a .jar extension.

7) What does the resource adapter module contain?
The resource adapter module contains the following:
Java interfaces
Classes
Native libraries
Other documentation
Resource Adapter deployment descriptor
These modules are packaged as archive files with a .rar (Resource Adapter Archive) extension.

8) What are the main
components of the J2EE application?
A J2EE component is assembled into a J2EE application with its related classes and files. It can also communicate with other components. J2EE defines the following main components:
Application client components
Java Servlet and JavaServer Pages technology components
Business components (Enterprise JavaBeans)
Resource adapter components

9) What is considered a web component?
Java Servlet and JavaServer Pages technology components are considered web components. Servlets are Java programming language classes which dynamically receive requests and generate responses. JavaServer Pages execute as servlets and allow a more natural approach to creating static content.

10) What are the types of J2EE clients?
Applets
Application clients
Java Web Start-enabled clients
Wireless clients

11) What do you understand by the word applet?
An applet is a J2EE component that typically executes in a web browser. It can also execute in a variety of other applications or devices that support the applet programming model.

12) What is a container?
A container is the runtime support of a system-level entity. Containers provide components with features such as lifecycle management, security, deployment, and threading.

13) What is an "applet container"?
A container that provides support for the applet programming model is known as an "applet container."

14) What do you understand by a thin client?
A thin client is a lightweight interface to the application that does not support operations like querying databases, executing complex business rules, or connecting to legacy applications.

15) What is JavaServer Faces (JSF)?
JavaServer Faces is a user interface (UI) framework for Java-based web applications. JavaServer Faces provides a set of reusable UI components, a standard for web applications. JSF is based on the MVC design pattern.
It automatically saves the form data to the server and repopulates the form data when it is displayed on the client side.

16) What is the EJB platform?
EJB stands for Enterprise JavaBeans. The EJB platform manages functions such as transaction and state management, resource pooling, multithreading, and simple searches while you concentrate on writing business logic.

17) What do you mean by a deployment descriptor?
A deployment descriptor is an XML (Extensible Markup Language) file with a .xml extension. It is used to describe a component's deployment settings. A J2EE application and each of its modules have their own deployment descriptor.

18) Define Struts in the J2EE framework.
Struts is an application development framework based on the MVC (Model-View-Controller) architecture. It is a combination of Java Servlets, JSP, custom tags, and messages. It is used to design applications for large enterprises. It can be described as:
Model
The model defines the internal state of a system. It can be either a single Java Bean or a cluster of Java Beans, depending on the application architecture.
View
JSP technology is used to design the view of an enterprise application.
Controller
A controller is used to manage the actions of users. It processes the client request and responds based on the request. The main component in the framework is a servlet of class ActionServlet. This servlet is configured by defining a set of ActionMappings.

19) Define Hashtable in J2EE.
Hashtable is similar to HashMap except that Hashtable is synchronized. A Hashtable is a collection of synchronized objects where null values and duplicate values are not allowed.

20) Define Hibernate and HQL.
Hibernate is an object-relational mapping and query service. In Hibernate, we can write HQL (Hibernate Query Language) scripts instead of SQL, which saves a lot of time and effort. Hibernate provides more powerful support for association, inheritance, polymorphism, composition, and collections. We can easily run queries against the database using Java objects.
Hibernate also allows us to express queries using Java-based criteria.

21) What are the limitations of Hibernate?
Following are some limitations of Hibernate:
Slower execution of queries.
Only HQL support is available for composite keys.
No shared references are available to the value type.

22) What are the major benefits of Hibernate?
Following are some major benefits of Hibernate:
Hibernate is independent of database and vendor, so it is a portable framework.
Domain objects can be mapped to relational database tables.
JPA support for standard ORM.
Better database connectivity in Hibernate when compared to JDBC.

23) Define ORM and its working in J2EE.
ORM refers to Object-Relational Mapping. An object in a Java class is mapped into the tables of a relational database using metadata that describes the mapping between the objects and the database. It transforms data from one representation to another.

24) What is authorization?
Authorization is the process by which access to a method or resource is determined. It relies on determining whether the principal associated with a request, established through authentication, is in a given security role. A security role can be explained as a logical grouping of users defined by the person who assembles the application. A deployer maps the security roles to security identities. Security identities may be principals or groups in the operational environment.

25) Define authorization constraint.
An authorization rule which determines who is permitted to access a web resource collection is known as an authorization constraint.

26) How will you explain the save() and saveOrUpdate() methods in Hibernate?
The save() method in Hibernate is used to store an object in the database. It creates a new entry if the record doesn't exist.
The saveOrUpdate() method in Hibernate is used to update an object using its identifier. If the identifier is unavailable, this method calls save().
If the identifier is available, it will call the update() method.

27) How will you explain the load() and get() methods?
load(): If the object is missing from the cache and the database, the load() method throws an exception. load() never returns null.
get(): If the object is missing from the cache and the database, the get() method returns null instead of throwing an exception.

28) What is a web container in J2EE?
A web container is the interface between a web component and the low-level platform, with defined functionality designed to support the component.

29) What is the concept of connection pooling?
Connection pooling is a simple concept which is popular for reusing existing connections. It means that if database connections are already well-defined and connected, then they can be reused whenever there is a requirement, instead of creating a new one.

30) What do you understand by a servlet?
A servlet is a server-side component which provides full functionality to create a server-side program. There are different servlets available with specific designs for a variety of protocols. The most popular protocol for servlets is HTTP. Servlets use the classes in the Java packages javax.servlet and javax.servlet.http, such as HttpServletRequest, HttpServletResponse, and HttpSession.
All servlets must implement the Servlet interface, which defines the life-cycle methods.

31) Give some advantages of ORM (Object-Relational Mapping).
Productivity
Data-access code is generated automatically, reducing the overall data-access development time, based on the defined data model.
Performance
The complete data-access requirements of an application are managed by the automated code generated by the ORM, which means that there is no need for any extra code, and the overall data access process is made faster and optimized.
Vendor independence
The generated code is independent of the vendor, which increases the overall portability of an application.
Maintainability
The code generated by the ORM is well tested and easy for a developer to understand.

32) Tell about the core interfaces of the Hibernate framework.
Session interface
SessionFactory interface
Configuration interface
Transaction interface
Query and Criteria interfaces

33) What is B2B?
B2B refers to business-to-business.

34) What file extensions are used for the Hibernate mapping file and the Hibernate configuration file?
For a Hibernate mapping file, the file name should look like filename.hbm.xml.
For the Hibernate configuration file, the file name should be hibernate.cfg.xml.

35) Define a way to add a Hibernate mapping file in the Hibernate configuration file.
It can be done by adding a mapping element such as <mapping resource="filename.hbm.xml"/> inside the <session-factory> element of hibernate.cfg.xml.

36) What are the main components of multi-tier architecture?
The main components of multi-tier architecture are:
Presentation tier
The front-end components in this tier are used to display the presentation.
Resource tier
The back-end components in this tier are used to communicate with the database.
Business tier
The components in this tier are used to provide business logic for the system.

37) Explain JTA, JNDI, and JMS.
JTA stands for Java Transaction API, which is used for coordinating and managing transactions across the enterprise information system.
JNDI stands for Java Naming and Directory Interface, which is used for accessing data from directory services.
JMS
stands for Java Message Service, which is used for receiving and sending messages through messaging systems.

38) Explain the J2EE tiers.
J2EE has the following tiers:
Client tier
It refers to the browser from which requests are sent to the server. The interfaces available in this tier are an HTML browser, a Java application, an applet, or a non-Java application.
Middle tier
It comprises a presentation tier and an integration tier. The UI (User Interface) is created in the presentation tier using JavaServer Pages. The business logic is written in the business tier with the help of Enterprise JavaBeans. The objects of the database are created in the integration tier.
Backend
It constitutes the Enterprise Information System (EIS), which is used to store the data.

39) Describe EAR, WAR, and JAR.
EAR stands for Enterprise Archive file. It consists of web, EJB, and client components. All the components of the EAR are packed in a compressed file with the extension .ear.
WAR stands for Web Archive file. It consists of all the components related to the web application. All the components are packed in a compressed file with the extension .war.
JAR stands for Java Archive file. It consists of all the class files and library files which constitute an API (Application Programming Interface). All the components are packed in a compressed file with the extension .jar.
Each type of file (.ear, .war, and .jar) is processed uniquely by application servers, servlet containers, EJB containers, etc.

40) What do you understand by Spring?
Spring is a lightweight open source framework for developing enterprise applications. It resolves the complexity of enterprise application development and provides easy development for J2EE. It was initially written by Rod Johnson.
It was released under the Apache 2.0 license in June 2003.

41) What are the different modules used in Spring?
There are mainly seven core modules in Spring:
The Core container module
Object/Relational mapping module
DAO module
Application context module
Aspect-Oriented Programming (AOP) module
Web module
MVC module

42) What is action mapping?
In action mapping, a user specifies an action class for a particular URL, i.e., a path, and different target views, i.e., forwards, to which the request-response is forwarded. The ActionMapping defines the information that the ActionServlet knows about the mapping of a particular request to an instance of a specific Action class. The mapping is passed to the execute() method of the Action class, enabling access to this information directly.

43) What do you understand by ActionForm?
ActionForm is a Java bean which may be associated with one or more ActionMappings. A Java bean becomes a FormBean when it extends the class org.apache.struts.action.ActionForm. The ActionForm object is generally populated automatically on the server side as the client enters data in the UI. ActionForm maintains the session state for a web application.

44) What is a backing bean?
A backing bean is a JavaBeans component which corresponds to a JavaServer Pages page that includes JavaServer Faces components. The backing bean defines the properties for the components on the page and the methods which perform processing for the components. This processing may include event handling, validation, and processing associated with navigation.

45) What is a build file?
A build file is an XML file that consists of one or more asant targets. A target is a set of tasks that a user wants to execute. When starting asant, a user can select which target is to be executed. If no target is given, then the project's default target is executed.

46) What do you understand by business logic?
Business logic is the code that implements the functionality of an application.
In the EJB (Enterprise JavaBeans) architecture, this logic is implemented by the methods of an enterprise bean.

47) How will you explain CDATA?
CDATA is a predefined XML construct for character data which means "don't interpret these characters." It contrasts with parsed character data (PCDATA), to which the standard rules of XML syntax apply. CDATA sections are often used to show examples of XML syntax.

48) What do you mean by the Component Contract?
The contract between a J2EE component and its container is known as the component contract. This type of contract includes:
Life-cycle management of the component
An interface which the instance uses to obtain various information and services from its container
A list of services that every container must provide for its components

49) What do you understand by a Connector? Explain the Connector Architecture.
A connector is a standard extension mechanism for containers which provides connectivity to enterprise information systems. It is specific to an enterprise information system and consists of a resource adapter and application development tools for enterprise information system connectivity. The resource adapter is plugged into a container through its support for system-level contracts defined in the Connector architecture.
The architecture for the integration of J2EE products with enterprise information systems is known as the Connector architecture. A Connector architecture consists of:
A resource adapter, which is provided by an enterprise information system vendor
A J2EE product that allows this resource adapter to plug in
The Connector architecture also defines a set of contracts which a resource adapter must support to plug into a J2EE product (e.g., transactions, security, and resource management).
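As an illustration of the deployment descriptor discussed in question 17, here is a minimal web.xml sketch for a web module; the servlet name, class, and URL pattern are invented for the example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
  <!-- Declares a hypothetical servlet class ... -->
  <servlet>
    <servlet-name>HelloServlet</servlet-name>
    <servlet-class>com.example.HelloServlet</servlet-class>
  </servlet>
  <!-- ... and maps it to a URL pattern -->
  <servlet-mapping>
    <servlet-name>HelloServlet</servlet-name>
    <url-pattern>/hello</url-pattern>
  </servlet-mapping>
</web-app>
```

This file is packaged inside the web module's WEB-INF directory, and the container reads it at deployment time to learn the component's settings.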
A list of top frequently asked Deep Learning Interview Questions and answers are given below.

1) What is deep learning?
Deep learning is a part of machine learning with algorithms inspired by the structure and function of the brain, called artificial neural networks. In the mid-1960s, Alexey Grigorevich Ivakhnenko published the first general, working learning algorithm for deep networks. Deep learning is suited to a range of fields such as computer vision, speech recognition, natural language processing, etc.

2) What are the main differences between AI, Machine Learning, and Deep Learning?
AI stands for Artificial Intelligence. It is a technique which enables machines to mimic human behavior.
Machine Learning is a subset of AI which uses statistical methods to enable machines to improve with experience.
Deep learning is a part of Machine Learning which makes the computation of multi-layer neural networks feasible. It takes advantage of neural networks to simulate human-like decision making.

3) Differentiate supervised and unsupervised deep learning procedures.
Supervised learning is a system in which both input and desired output data are provided. Input and output data are labeled to provide a learning basis for future data processing.
An unsupervised procedure does not need explicit labeling information, and the operations can be carried out without it. The most common unsupervised learning method is cluster analysis.
It is used for exploratory data analysis to find hidden patterns or groupings in data.

4) What are the applications of deep learning?
There are various applications of deep learning:
Computer vision
Natural language processing and pattern recognition
Image recognition and processing
Machine translation
Sentiment analysis
Question answering systems
Object classification and detection
Automatic handwriting generation
Automatic text generation

5) Do you think a deep network is better than a shallow one?
Both shallow and deep networks are capable of approximating any function. But for the same level of accuracy, deeper networks can be much more efficient in terms of computation and number of parameters. Deeper networks can create deep representations: at every layer, the network learns a new, more abstract representation of the input.

6) What do you mean by "overfitting"?
Overfitting is one of the most common issues in deep learning. It usually occurs when a deep learning model captures the noise of the specific training data rather than the underlying pattern. It appears when the model fits the training data too well, and shows up when the model exhibits high variance and low bias.

7) What is backpropagation?
Backpropagation is a training algorithm used for multilayer neural networks. It transfers the error information from the end of the network to all the weights inside the network.
It allows the efficient computation of the gradient. Backpropagation can be divided into the following steps:
Forward propagation of training data through the network to generate output.
Using the target value and output value to compute the error derivative with respect to the output activations.
Backpropagating to compute the derivative of the error with respect to the output activations in the previous layer, continuing for all hidden layers.
Using the previously calculated derivatives for the output and all hidden layers to calculate the error derivative with respect to the weights.
Updating the weights.

8) What is the function of the Fourier transform in deep learning?
The Fourier transform is a highly efficient tool for analyzing and managing large amounts of signal data. It represents a signal in terms of its constituent frequencies (a spectral representation), which is extremely helpful for processing real-time array data and all categories of signals.

9) Describe the theory of the autonomous form of deep learning in a few words.
There are several forms and categories available for this subject, but the autonomous pattern represents independent or unspecified mathematical bases which are free from any specific categorizer or formula.

10) What is the use of deep learning in today's age, and how is it aiding data scientists?
Deep learning has brought a significant revolution to the field of machine learning and data science. The concept of the convolutional neural network (CNN) is a main center of attention for data scientists. It is widely adopted because of its advantages in performing next-level machine learning operations. The advantages of deep learning also include the ability to clarify and simplify issues based on an algorithm, due to its flexible and adaptable nature. It is one of the rare procedures which allow the movement of data in independent pathways.
Most data scientists view this medium as an advanced, extended addition to the existing process of machine learning, and utilize it for solving complex day-to-day issues.

11) What are the deep learning frameworks or tools?
Deep learning frameworks and tools include:
TensorFlow, Keras, Chainer, PyTorch, Theano and its ecosystem, Caffe2, CNTK, DyNet, Gensim, DSSTNE, Gluon, Paddle, MXNet, BigDL

12) What are the disadvantages of deep learning?
There are some disadvantages of deep learning, which are:
A deep learning model takes a long time to train. In some cases, it even takes several days to train a single model, depending on its complexity.
A deep learning model is not good for small data sets; it fails there.

13) What is the meaning of the term weight initialization in neural networks?
In neural networks, weight initialization is one of the essential factors. A bad weight initialization prevents a network from learning. On the other hand, a good weight initialization helps in giving quicker convergence and a better overall error. Biases can be initialized to zero. The standard rule for setting the weights is to be close to zero without being too small.

14) Explain data normalization.
Data normalization is an essential preprocessing step, which is used to rescale values to fit in a specific range. It assures better convergence during backpropagation. In general, data normalization boils down to subtracting the mean from each data point and dividing by the standard deviation.

15) Why is zero initialization not a good weight initialization process?
If the set of weights in the network is set to zero, then all the neurons at each layer will start producing the same output and the same gradients during backpropagation. As a result, the network cannot learn at all because there is no source of asymmetry between neurons.
That is the reason why we need to add randomness to the weight initialization process.

16) What are the prerequisites for starting in deep learning?
There are some basic requirements for starting in deep learning, which are:
Machine Learning
Mathematics
Python programming

17) What are the supervised learning algorithms in deep learning?
Artificial neural networks
Convolutional neural networks
Recurrent neural networks

18) What are the unsupervised learning algorithms in deep learning?
Self-Organizing Maps
Deep belief networks (Boltzmann Machines)
Autoencoders

19) What are the layers in a neural network?
Input layer
The input layer contains input neurons which send information to the hidden layer.
Hidden layer
The hidden layer is used to send data to the output layer.
Output layer
The data is made available at the output layer.

20) What is the use of the activation function?
The activation function is used to introduce nonlinearity into the neural network so that it can learn more complex functions. Without the activation function, the neural network would only be able to learn functions which are linear combinations of its input data.
The activation function translates the inputs into outputs. It is responsible for deciding whether a neuron should be activated or not. It makes the decision by calculating the weighted sum and further adding a bias to it. The basic purpose of the activation function is to introduce non-linearity into the output of a neuron.

21) How many types of activation function are available?
Binary step
Sigmoid
Tanh
ReLU
Leaky ReLU
Softmax
Swish

22) What is a binary step function?
The binary step function is an activation function which is based on a threshold. If the input value is above or below a particular threshold, the neuron is activated and sends the same signal to the next layer. This function does not allow multi-value outputs.

23) What is the sigmoid function?
The sigmoid activation function is also called the logistic function.
It is traditionally a very popular activation function for neural networks. The input to the function is transformed into a value between 0.0 and 1.0. Inputs that are much larger than 1.0 are transformed to values close to 1.0; similarly, inputs that are much smaller than 0.0 are transformed to values close to 0.0. The shape of the function for all possible inputs is an S-shape from zero up through 0.5 to 1.0. It was the default activation function used in neural networks in the early 1990s.

24) What is the tanh function?
The hyperbolic tangent function, also known as tanh for short, is a similarly shaped nonlinear activation function. It produces output values between -1.0 and 1.0. Later in the 1990s and through the 2000s, this function was preferred over the sigmoid activation function, as models using it were easier to train and often had better predictive performance.

25) What is the ReLU function?
A node or unit which implements the rectified linear activation function is referred to as a rectified linear activation unit, or ReLU for short. Generally, networks that use the rectifier function for the hidden layers are referred to as rectified networks. The adoption of ReLU may easily be considered one of the few milestones of the deep learning revolution.

26) What is the use of the leaky ReLU function?
The leaky ReLU (LReLU or LReL) modifies the function to allow small negative values when the input is less than zero.

27) What is the softmax function?
The softmax function is used to calculate the probability distribution of an event over 'n' different events. One of the main advantages of using softmax is the output probability range: each output will be between 0 and 1, and the sum of all the probabilities will be equal to one. When the softmax function is used for a multi-class classification model, it returns the probability of each class, and the target class will have the highest probability.

28) What is the Swish function?
Swish is a new, self-gated activation function, discovered by researchers at Google.
According to their paper, it performs better than ReLU with a similar level of computational efficiency.

29) What is the most used activation function?
The ReLU function is the most used activation function. It helps us to solve the vanishing gradient problem.

30) Can the ReLU function be used in the output layer?
No, the ReLU function is used in hidden layers.

31) In which layer is the softmax activation function used?
The softmax activation function is used in the output layer.

32) What do you understand by Autoencoder?
An autoencoder is an artificial neural network. It can learn a representation for a set of data without any supervision. The network learns automatically by copying its input to the output; typically, the internal representation has smaller dimensions than the input vector. As a result, autoencoders can learn efficient ways of representing the data. An autoencoder consists of two parts: an encoder, which tries to fit the inputs to the internal representation, and a decoder, which converts the internal state to the outputs.

33) What do you mean by Dropout?
Dropout is a cheap regularization technique used for reducing overfitting in neural networks. We randomly drop out a set of nodes at each training step. As a result, we create a different model for each training case, and all of these models share weights. It is a form of model averaging.

34) What do you understand by Tensors?
Tensors are the de facto standard for representing data in deep learning. They are multidimensional arrays, which allow us to represent data with higher dimensions. In general, we deal with high-dimensional data sets, where the dimensions refer to the different features present in the data set.

35) What do you understand by Boltzmann Machine?
A Boltzmann machine (also known as a stochastic Hopfield network with hidden units) is a type of recurrent neural network. In a Boltzmann machine, nodes make binary decisions with some bias.
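The stochastic binary decision just described can be sketched as follows: a node switches on with a probability given by the logistic function of its total input (names and the sampling setup are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # logistic function used as the turn-on probability
    return 1.0 / (1.0 + np.exp(-x))

def stochastic_binary_unit(total_input, n_samples=10000):
    # a Boltzmann-machine-style node: state is 1 with probability
    # sigmoid(total_input), otherwise 0 (a biased binary decision)
    p_on = sigmoid(total_input)
    states = rng.random(n_samples) < p_on
    return states.astype(int)

states = stochastic_binary_unit(0.5)
# the empirical firing rate approaches sigmoid(0.5) ~ 0.62
print(states.mean())
```

Unlike a deterministic neuron, repeated evaluations with the same input produce different states; only the average firing rate is fixed by the input.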
Boltzmann machines can be strung together to create more sophisticated systems, such as deep belief networks, and can be used to optimize the solution to a problem.
Some important points about the Boltzmann Machine:
It uses a recurrent structure.
It consists of stochastic neurons, each of which is in one of two possible states, either 1 or 0.
The neurons are either in an adaptive state (free state) or a clamped state (frozen state).
If we apply simulated annealing to a discrete Hopfield network, it becomes a Boltzmann Machine.

36) What is Model Capacity?
The capacity of a deep learning neural network controls the scope of the types of mapping functions that it can learn. With sufficient capacity, a model can approximate any given function. A higher model capacity means that a larger amount of information can be stored in the network.

37) What is the cost function?
A cost function tells us how well the neural network is performing with respect to a given training sample and the expected output. It may depend on variables such as weights and biases. It measures the performance of the neural network as a whole. In deep learning, our priority is to minimize the cost function, which is why we use the concept of gradient descent.

38) Explain gradient descent.
Gradient descent is an optimization algorithm used to minimize a function by repeatedly moving in the direction of steepest descent, as specified by the negative of the gradient. It is an iterative algorithm; in every iteration, we compute the gradient of the cost function with respect to each parameter and update the parameters via the following formula:

Θ := Θ − α · ∇J(Θ)

Where:
Θ is the parameter vector,
α is the learning rate,
J(Θ) is the cost function.

In machine learning, gradient descent is used to update the parameters of our model.
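The update rule above can be sketched on a toy cost function J(θ) = θ², whose gradient is 2θ (the cost function and the step count are illustrative):

```python
import numpy as np

def grad_J(theta):
    # gradient of the toy cost J(theta) = theta^2
    return 2.0 * theta

theta = np.array([5.0])   # initial parameter value
alpha = 0.1               # learning rate

for _ in range(100):
    # theta := theta - alpha * grad J(theta)
    theta = theta - alpha * grad_J(theta)

# theta shrinks toward the minimum of J at 0
print(theta)
```

Each iteration multiplies θ by (1 − 2α) = 0.8, so after 100 steps the parameter is vanishingly close to the minimizer.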
The parameters represent the coefficients in linear regression and the weights in neural networks.

39) Explain the following variants of Gradient Descent: Stochastic, Batch, and Mini-batch.
Stochastic Gradient Descent: Stochastic gradient descent calculates the gradient and updates the parameters using only a single training example.
Batch Gradient Descent: Batch gradient descent calculates the gradients for the whole dataset and performs just one update at each iteration.
Mini-batch Gradient Descent: Mini-batch gradient descent is a variation of stochastic gradient descent. Instead of a single training example, a mini-batch of samples is used. It is one of the most popular optimization algorithms.

40) What are the main benefits of Mini-batch Gradient Descent?
It is computationally efficient compared to stochastic gradient descent.
It improves generalization by finding flat minima.
It improves convergence by using mini-batches: we can approximate the gradient of the entire training set, which might help to avoid local minima.

41) What is matrix element-wise multiplication? Explain with an example.
Element-wise matrix multiplication takes two matrices of the same dimensions and produces another matrix whose elements are the products of the corresponding elements of matrices a and b. For example, multiplying [[1, 2], [3, 4]] element-wise with [[5, 6], [7, 8]] gives [[5, 12], [21, 32]].

42) What do you understand by a convolutional neural network?
A convolutional neural network, often called a CNN, is a feedforward neural network. It uses convolution in at least one of its layers. The convolutional layer contains a set of filters (kernels). Each filter slides across the entire input image, computing the dot product between the weights of the filter and the input image.
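The sliding dot product just described can be sketched directly (no padding, stride 1; the filter and image values are illustrative):

```python
import numpy as np

def conv2d(image, kernel):
    # slide the kernel over the image, taking a dot product at each position
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            # element-wise product of filter weights and patch, then sum
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])   # a simple diagonal-difference filter
feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (3, 3)
```

A 2×2 filter over a 4×4 image yields a 3×3 feature map; during training, the network would learn the kernel values instead of having them fixed.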
As a result of training, the network automatically learns filters that can detect specific features.

43) Explain the different layers of CNN.
There are four layer concepts that we should understand in a CNN (Convolutional Neural Network):
Convolution: This layer comprises a set of independent filters. All these filters are initialized randomly and become the parameters that will be learned by the network subsequently.
ReLU: The ReLU layer is used with the convolutional layer.
Pooling: This layer reduces the spatial size of the representation to lower the number of parameters and the computation in the network. It operates on each feature map independently.
Fully Connected: Neurons in a fully connected layer have complete connections to all activations in the previous layer, as in regular neural networks. Their activations can be computed with a matrix multiplication followed by a bias offset.

44) What is an RNN?
RNN stands for Recurrent Neural Network. These are artificial neural networks designed to recognize patterns in sequences of data, such as handwriting, text, the spoken word, genomes, and numerical time series data. RNNs use the backpropagation algorithm for training. Because of their internal memory, RNNs can remember important things about the input they received, which enables them to be very precise in predicting what is coming next.

45) What are the issues faced while training Recurrent Networks?
A Recurrent Neural Network uses the backpropagation algorithm for training, but it is applied at every timestamp. This is usually known as Backpropagation Through Time (BPTT).
There are two significant issues with backpropagation here:
Vanishing Gradient: When we perform backpropagation, the gradients tend to get smaller and smaller as we keep moving backward in the network.
As a result, the neurons in the earlier layers learn very slowly compared with the neurons in the later layers. Earlier layers are more valuable because they are responsible for learning and detecting simple patterns; they are the building blocks of the network. If they produce improper or inaccurate results, we cannot expect the next layers and the complete network to perform well and provide accurate results. The training procedure takes long, and the prediction accuracy of the model decreases.
Exploding Gradient: Exploding gradients are the main problem when large error gradients accumulate. They result in very large updates to the neural network model weights during training. The gradient descent process works best when updates are small and controlled. When the magnitudes of the gradients accumulate, an unstable network is likely to occur, which can cause poor prediction results or even a model that reports nothing useful.

46) Explain the importance of LSTM.
LSTM stands for Long Short-Term Memory. It is an artificial RNN (Recurrent Neural Network) architecture used in the field of deep learning. An LSTM has feedback connections, which makes it a "general purpose computer": it can process not only single data points but also entire sequences of data. LSTMs are a special kind of RNN capable of learning long-term dependencies.

47) What are the different layers of Autoencoders? Explain briefly.
An autoencoder contains three layers:
Encoder: The encoder compresses the input into a latent space representation. It encodes the input images as a compressed representation in a reduced dimension. The compressed images are a distorted version of the original images.
Code: The code layer represents the compressed input which is fed to the decoder.
Decoder: The decoder layer decodes the encoded image back to its original dimension. The decoded image is a lossy reconstruction of the original image.
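The encoder/code/decoder structure can be sketched with untrained random weights, just to show the dimensions involved (all names and sizes are illustrative, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

input_dim, code_dim = 8, 3  # the code layer is smaller than the input

# randomly initialized weights; training would adjust these to
# minimize the reconstruction error
W_enc = rng.normal(size=(code_dim, input_dim))
W_dec = rng.normal(size=(input_dim, code_dim))

def encoder(x):
    # compress the input into the latent (code) representation
    return np.tanh(W_enc @ x)

def decoder(code):
    # map the code back up to the original input dimension
    return W_dec @ code

x = rng.normal(size=input_dim)
code = encoder(x)                # shape (3,): compressed representation
reconstruction = decoder(code)   # shape (8,): same dimension as the input
print(code.shape, reconstruction.shape)
```

The bottleneck (8 → 3 → 8) is what forces the network to learn an efficient representation: the decoder cannot simply copy the input.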
It is automatically reconstructed from the latent space representation.

48) What do you understand by Deep Autoencoders?
A Deep Autoencoder is an extension of the simple autoencoder. The first layer in a Deep Autoencoder is responsible for first-order features in the raw input. The second layer is responsible for second-order features corresponding to patterns in the appearance of the first-order features. Deeper layers of the Deep Autoencoder tend to learn even higher-order features.
A deep autoencoder is the combination of two symmetrical deep-belief networks:
The first four or five shallow layers represent the encoding half.
The other four or five layers make up the decoding half.

49) What are the three steps to developing the necessary assumption structure in Deep Learning?
The procedure of developing an assumption structure involves three specific steps:
The first step is algorithm development. This particular process is lengthy.
The second step is algorithm analysis, which represents the in-process methodology.
The third step is implementing the general algorithm in the final procedure. The entire framework is interlinked and required throughout the process.

50) What do you understand by Perceptron? Also, explain its types.
A perceptron is a neural network unit (an artificial neuron) that performs certain computations to detect features. It is an algorithm for the supervised learning of binary classifiers, and it enables neurons to learn and process elements in the training set one at a time.
There are two types of perceptrons:
Single-Layer Perceptron: Single-layer perceptrons can learn only linearly separable patterns.
Multilayer Perceptron: Multilayer perceptrons, or feedforward neural networks with two or more layers, have higher processing power.
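A single-layer perceptron of the kind described above can be sketched on the logical AND problem, which is linearly separable (the learning rate and epoch count are illustrative):

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    # single-layer perceptron with a step activation;
    # weights are updated only when a prediction is wrong
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if (w @ xi + b) > 0 else 0
            err = target - pred
            w += lr * err * xi
            b += lr * err
    return w, b

# logical AND: linearly separable, so a single-layer perceptron can learn it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if (w @ xi + b) > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```

Replacing the labels with XOR (y = [0, 1, 1, 0]) would make this fail, because XOR is not linearly separable; that is exactly the limitation a multilayer perceptron overcomes.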
Published - Tue, 06 Dec 2022
Created by - Admin s
A list of top frequently asked Salesforce Interview Questions and answers is given below.

1) What is Salesforce?
Salesforce is a cloud-based service and a customer relationship management (CRM) platform delivered as Software as a Service (SaaS). It helps you to manage customer relationships, integrate with other systems, and build apps. The tool helps you to manage and create a custom solution as per your business requirements.
Before Salesforce, companies ran their own servers for customer relationship management (CRM). They were costly, time-consuming, and very hard to use. The feasible solution to this problem was to build affordable CRM software and deliver it entirely online as a service.
In a few years, there has been a significant surge in cloud computing technologies, and Salesforce has had an immense impact on the world of computing. Salesforce has grown into the fifth largest software company in the world and is the top CRM service provider.

2) What is an app in Salesforce?
An app is a collection of tabs that works as a unit to provide functionality. Users can switch between apps in the Force.com app drop-down menu.
A Salesforce application is a container of tabs, processes, and services.
We can create new apps by grouping standard tabs, or customize existing apps according to our work.
Salesforce provides many standard apps such as Call Center, Marketing, Sales, etc.
There are two types of Salesforce applications:
Custom App
Service Cloud Console
Creating a Salesforce app
Follow the below steps to create a Salesforce app:
Step 1: Follow this navigation: Setup -> App Setup -> Create -> Apps -> click on 'New'
Step 2: Select the Custom Application radio button and provide the app name
Step 3: Click on the Next button
Step 4: Select the image from the Documents object
Step 5: Select the objects
Step 6: Click on the Visible checkbox and save
To add this Salesforce app to any other profiles or tabs:
Follow the below steps:
Step 1: Setup -> App Setup -> Create -> Apps
Step 2: Select the app from the list and click on Edit
Step 3: If we want to change the image, click on Insert an image and take the image from Documents.

3) What are objects in Salesforce?
Objects are the database tables in Salesforce. Objects allow storing data specific to the organization in Salesforce.
There are two types of objects in Salesforce:
Standard objects
Custom objects
Standard objects: Standard objects are objects that are inbuilt in Salesforce.com. Examples: Accounts, Contacts, Products, Leads, Opportunities, Campaigns, Users, Contracts, Reports, Dashboards, etc.
Custom objects: Custom objects are objects created by us; they are user-defined objects. Custom objects store information that is important and unique to our organization. They are an integral part of any application and provide a structure for sharing data.
Custom objects have the following properties:
Custom fields
Relationships to other objects
Page layouts
A custom user interface tab

4) What are user profiles in Salesforce?
User profiles are a group of permissions and settings that determine what a user can access. Salesforce admins can assign users a profile depending upon their job roles. The user profile includes all the tabs, records, and page access that a user requires.
You can set up and manage profiles, by which you can create a secure boundary that dictates a user's access rights.

5) Can we assign the same profile to two different users? Is it possible that two profiles can be assigned to the same user?
The profile defines the level of access a user can have in Salesforce.
In a Salesforce org, it is possible to assign a single profile to any number of users. For example, consider a sales or service team in a company: the entire team has access to the same profile.
The admin can create one profile for the whole sales team, which will have access to the leads, campaigns, contacts, and other objects deemed necessary by the company.
In this way, many users can be assigned the same profile. In case a team leader needs access to additional records, it can be done by assigning permission sets to only those users.
Each user can be assigned only one profile.

6) What is the difference between Force.com and Salesforce.com?
Salesforce.com is Software as a Service (SaaS), while Force.com is a Platform as a Service (PaaS).

7) What is a relationship in Salesforce? What are its types?
We can establish relationships between objects in Salesforce, i.e., associate one object with another.
Example: We have an object Party (to store information about the party) and want to associate it with another object, People (information about the participants), so we can associate the object Party with People. The relationship type also determines how record sharing, required fields in page layouts, and data deletion are handled.
Salesforce supports the following types of relationships between objects:
Master-Detail relationship
Lookup relationship
Self-relationship
External lookup relationship
Indirect lookup relationship
Many-to-Many relationship
Hierarchical relationship

8) What is a Master-Detail relationship?
It is a tightly coupled relationship between Salesforce objects. In a Master-Detail relationship, the parent record controls the behavior of the child record regarding visibility and sharing. If a master record is deleted, the child records associated with it are also deleted. The security settings of the parent object apply to the child object.
Example: Suppose we create a Master-Detail relationship between the objects Party and People, where Party is the parent object and People is the child object.
Then if we delete a Party record, all the associated People records will also be deleted.
When two objects form a Master-Detail relationship, we can create a unique type of field on the master object, called a Roll-up summary. A Roll-up summary allows us to calculate values from the child records linked to a parent record, such as the count of child records, an average, a sum, etc.

9) What is the Lookup relationship?
It is a loosely coupled relationship between Salesforce objects. In a Lookup relationship, both parent and child have their own sharing settings and security controls, which means that if a parent record is deleted, the child records remain in the system.
Consider the Party and People objects again. The figure below provides a visual representation of the Lookup relationship between the Party and People objects. In this diagram, the Party object record has been deleted, but the People record is still available. This relationship between the objects is a Lookup relationship.

10) What are reports in Salesforce?
Reports are an essential part of any business; they provide a clear picture to the management. Reports are used to track progress towards various tasks, control expenditure, and increase revenue, and they help in trend prediction.
Salesforce.com allows you to generate reports in different styles. In Salesforce.com, we can create four types of reports:
Tabular reports
Summary reports
Matrix reports
Joined reports

11) What are some Governor limits in Salesforce?
Governor limits control how much data and how many records you can store in the shared databases, because Salesforce is based on a multi-tenant architecture. In other words, Salesforce uses a single database to store the data of multiple customers. Salesforce introduced the concept of Governor limits to prevent monopolization of shared resources between users.
Governor limits are the biggest challenge for a Salesforce developer.
This is because if Apex code exceeds a limit, the result is a runtime exception that cannot be handled. So as a Salesforce developer, you should be very careful while developing applications.
Here is a list of some significant Governor limits:
Per-transaction Apex limits
Static Apex limits
Size-specific Apex limits
Miscellaneous Apex limits
Force.com platform Apex limits
Email limits
Push Notification limits

12) What are the different ways to store various types of records in Salesforce?
There are many different ways in Salesforce to store various records, such as images, files, and documents. Some of them are as follows:
Attachments
Google Drive
Chatter files
Libraries

13) What is the fiscal year in Salesforce?
The starting and ending dates of a company's financial year are considered its fiscal year. The fiscal year is used to calculate annual financial statements in businesses and other organizations. Salesforce has two types of fiscal year:
Standard fiscal year
Custom fiscal year
Standard fiscal year: Salesforce provides a calendar by default as the standard fiscal year; it is the Gregorian calendar. But not all organizations use the same calendar; some use different calendars and need to change the fiscal year start month. It can be defined whether the fiscal year is based on the start or the end of the selected month.
To set up a standard fiscal year, navigate to: Setup -> Administer -> Company Profile -> Fiscal Year, and select the Standard Fiscal Year option.
Custom fiscal year: When the standard fiscal year does not meet the requirements of the organization, a custom fiscal year is used. To use the custom fiscal year, the administrator has to enable it.
The administrator must define the fiscal year to fit the company's calendar.
To set up the company's fiscal year, navigate to: Setup -> Administer -> Company Profile -> Fiscal Year, then select the Custom Fiscal Year option, select the checkbox next to the terms statement, click on Enable Custom Fiscal Year, and click OK.

14) How many Master-Detail relationship fields can be created in an object?
A maximum of two Master-Detail relationship fields is possible in an object.

15) How many Lookup relationship fields can be created in an object?
A maximum of 40 Lookup relationship fields is possible in an object.

16) What are the benefits of Salesforce?
Salesforce is the largest and leading cloud platform provider in the world, and its customer relationship management (CRM) offering is one of the most beneficial pieces of software. We get the following benefits by using this CRM:
Improved understanding of the organization
Enhanced communication between client and service provider
We can serve customers better by understanding them.
Salesforce automates repeated tasks.
Salesforce reduces cost and cycle time.
Salesforce improves the efficiency of teams.

17) What is a sandbox org? What are the different types of sandboxes in Salesforce?
A sandbox is a copy of the production org/environment. It is used for testing and development purposes. It is beneficial because it allows Apex development without disturbing the production environment.
A sandbox can be used when we want to test a newly developed Force.com application: we can develop and test it in the sandbox org instead of doing it directly in production.
There are four types of sandboxes in Salesforce.com:
Developer
Developer Pro
Partial Copy
Full

18) What is Apex in Salesforce?
Apex is a strongly typed, object-oriented programming language. It allows developers to execute flow and transaction control statements on the Salesforce server in combination with calls to the API. Its syntax looks like Java.
It uses Java-like syntax and acts like database stored procedures. Apex allows developers to add business logic to system events like button clicks, related record updates, and Visualforce pages.

19) What is Visualforce?
Visualforce is a framework for the Force.com platform. It is a component-based markup language that allows defining user interface components in Salesforce. The page layout feature enables you to configure the user interface easily, but by using Visualforce pages you can customize your user interface further.

20) Can you edit an Apex trigger/Apex class in a production environment? Can you edit a Visualforce page in a production environment?
No, we cannot edit Apex classes and triggers directly in the production environment. An Apex trigger or class must first be edited in a Developer Edition org, a testing org, or a sandbox org; then we can deploy it to production. A user who has the appropriate Apex permission must deploy the triggers and classes using deployment tools.
Visualforce pages, however, can be created and edited in both production and sandbox environments.

21) Why are Visualforce pages served from a different domain?
Visualforce pages are served from a different domain to block cross-site scripting and improve the security standard.

22) What are static and dynamic dashboards? Can dynamic dashboards be scheduled?
Static dashboards are ordinary dashboards that show the same data to every user who views them; for example, a sales manager or marketing manager would see the same figures in their Salesforce org. A normal dashboard shows the data for a single user.
Dynamic dashboards display information that is customized for the specific user viewing them. Consider the above example.
If the sales manager wants to view the report of a particular team member, he can use dynamic dashboards.
We can use dynamic dashboards when we want to show user-specific data, such as a particular user's quota, sales, productivity, meetings, etc. We can use a normal/static dashboard to show regional and organization-wide data to a set of users, such as sales in a region or team performance.

23) Which fields are automatically indexed in Salesforce?
The following fields are automatically indexed in Salesforce:
Primary keys (Id, Name, and Owner fields)
Foreign keys (master-detail or lookup fields)
Audit dates (such as SystemModStamp)
Custom fields (fields marked as an External ID or as unique)

24) What are skinny tables?
Salesforce can create skinny tables to contain frequently used fields and avoid joins. Skinny tables improve the performance of read-only operations. Skinny tables are kept in sync with their source tables when the source tables are modified.
Contact Salesforce customer support to use skinny tables. These tables are created and used automatically where appropriate. We cannot create, modify, or access skinny tables ourselves.
Considerations for skinny tables:
A skinny table can contain a maximum of 100 columns.
A skinny table cannot contain fields from other objects.

25) What is an Audit trail in Salesforce?
The Audit trail tracks the recent setup changes that you and other administrators have made to your organization. This is useful for organizations that have more than one administrator.
It can track the last twenty changes made to your organization.
It displays:
The date and time of the change
Who made the change (the administrator's name)
What the value was before the change

26) Can we delete a user in Salesforce?
No, it is not possible to delete a user in Salesforce.

27) Can we change the license when we create a profile?
No, we cannot change the license after creating the profile.

28) What is Deployment in Salesforce?
In the SFDC (Salesforce development cycle), you have to develop code in a sandbox, and then you might need to deploy it to another sandbox or to the production environment; this is called deployment. In other words, the movement of metadata from one organization to another organization is called deployment. The main reason deployment is needed is that you cannot develop Apex in your Salesforce production org.

29) What are the different ways of deployment in Salesforce?
Deployment can be done in the following ways:
Change Sets
Eclipse with Force.com IDE
Force.com Migration Tool (ANT/Java based)
Salesforce Package

30) What is the difference between a standard controller and a custom controller?
A standard controller automatically contains all the standard object properties and standard button functionality. It contains all the functionality and logic used in standard Salesforce pages.
Custom controllers are Apex classes that implement all the logic of a page without leveraging a standard controller. Custom controllers are associated with a Visualforce page through the controller attribute.

31) What is cloud computing?
Cloud computing is the provision of computational services such as storage, servers, databases, software, networking, analytics, and intelligence over the internet (the cloud). It brings organizations faster innovation, flexibility in allocating resources, and economies of scale, and it reduces the costs associated with storage.
Cloud-based storage makes it possible to save files in a remote database instead of on a proprietary hard drive or local storage device.
It provides access to the data, and to the software programs to run it, as long as an electronic device has access to the web.
Cloud services can be both private and public. Private cloud services provide services to a certain number of people, while public cloud services offer their services over the internet for a charge. These services are a group of networks that supply hosted services. Cloud providers also offer a hybrid option, which combines both private and public services.

32) What are the types of cloud services?
Based on services, cloud services provide users with a series of functionalities, like:
Email
Backup, storage, and data retrieval
Creating and testing apps
Data analysis
Audio and video streaming
Cloud computing is still a new service, but it has become a trend in a very short time. Nowadays, government agencies, small businesses, non-profit agencies, and individual consumers are using cloud computing.
Cloud computing is not a single piece of technology like a microchip. It is primarily a combination of three services: Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS).

33) How many certifications are available in Salesforce?
There are eight kinds of certifications available in Salesforce, which cater to different roles:
Administrator certifications
Developer certifications
Architect certifications
App Builder certifications
Implementation Expert or Consultant certifications
Marketer certifications
Pardot Expert certifications
CPQ certification

34) What is the Salesforce environment?
An environment, or organization, is the workspace for a particular user.
For example, if you sign up for a Facebook account, you are provided a unique username and password.
In the same way, if you subscribe to Force.com cloud computing, you will be provided valid credentials to work in your specified area of the cloud computing environment or org.
The Salesforce environment provides the ability to develop and test apps, and it can be used for production as well. The environment can be customized according to your requirements, such as Apex code, workflows, custom database attributes, and objects.

35) How to check for user licenses in the Salesforce work environment?
To check the Salesforce licenses, open the Salesforce workspace and navigate as follows: Setup -> Monitor -> System Overview, then go to the Data Storage section, where you can see the user licenses in the highlighted area. If you want to check all user licenses, select the "Show All" option.

36) What is MVC architecture in Visualforce?
MVC is a widely used architectural design pattern that divides an application into three components: Model, View, and Controller.
In Visualforce, the MVC architecture can be implemented using standard as well as custom objects. We can also use three newly introduced Salesforce constructs: pages, components, and controllers.
These pages work like JSP pages and give a user-friendly presentation. Each view has an associated controller. Developers can write their own controllers using the Apex programming language or use a standard controller.
Visualforce also has some auto-generated controllers to interact with the database.

37) When should Apex be used?
Apex can be used in different scenarios, such as:
To create email services
To create web services
To perform complex validation over multiple objects
To create complex business processes that are not supported by workflow
To create custom transactional logic
To attach custom logic to another operation

38) How does Apex work?
All Apex programs run on demand exclusively on the Force.com platform.
First, the application server compiles the Apex code into an abstract set of instructions that can be understood by the Apex runtime interpreter. After compilation, the compiled code is stored as metadata. Then, when an end user initiates the execution of Apex by clicking a button or opening a Visualforce page, the application server retrieves the compiled instructions from the metadata and forwards them to the runtime interpreter before returning the result.

39) What are the types of SOQL statements in Salesforce?
The Salesforce Object Query Language (SOQL) is used to perform database operations in Salesforce.com.
It is similar to the SELECT statement in the widely used Structured Query Language (SQL), but it is designed especially for Salesforce data.
With SOQL, we can create simple but powerful query strings in the following environments:
In the queryString parameter of the query() call
In Apex statements
In Visualforce controllers and getter methods
In the Schema Explorer of the Force.com IDE

40) What could be the reasons to lose data in Salesforce?
A few reasons for data loss in Salesforce are as follows:
Changing date and date-time types
Migrating to number, percent, or currency from another data type
Changing from multi-select picklist, checkbox, or auto number to other types
Changing to multi-select picklist from any type except picklist
Changing to auto number from any type except text
Switching from text area to email, URL, phone, or text

41) What is Workflow?
A workflow is an automated process that is used to validate evaluation criteria and rule criteria.

42) What is the difference between WhoId and WhatId?
"WhoId" denotes people, like contacts or leads, whereas "WhatId" denotes objects. For example, LeadId and ContactId are "WhoId" fields, while AccountId and OpportunityId are "WhatId" fields.

43) What is Data Skew in Salesforce?
When a very large number of child records (more than 10,000) is connected to a single parent record, the situation is called data skew in Salesforce.
Data skew can be of three types:
Account data skew
Ownership skew
Lookup skew

44) What is a collection in Apex? List all the different kinds of collections supported by Salesforce.
Collections in Apex are variables that are used to store multiple data records.
As there is a limit on the number of records that can be retrieved per transaction, we can use collection variables to hold the retrieved records. There are three types of collections in Salesforce:
- List
- Map
- Set

45) What are static resources?
Static resources are used to upload images, zip files, jar files, JavaScript, and CSS files that can be referenced in a Visualforce page. We can upload a file of at most 250 MB as a static resource.

46) What is the difference between actionSupport and actionFunction?
To understand the difference, let's look at their functionality. Both actionSupport and actionFunction are used to call a controller method via an AJAX request. The differences are as follows:
- actionFunction can call a controller method directly from JavaScript.
- actionSupport adds AJAX support to another Visualforce component, which then calls the controller method.
- actionFunction cannot add AJAX support to another component; however, from a component that supports AJAX events (onclick, onblur, etc.), actionFunction can be invoked to call the controller method.

47) How many types of email templates can be created in Salesforce?
Different types of email templates can be created in Salesforce. Some of them are listed below:
- HTML with letterhead: users with the "Edit HTML Templates" permission can create this template based on a letterhead.
- Custom HTML: users with the "Edit HTML Templates" permission can create this template without any letterhead.
- Visualforce: only administrators and developers can create this template.
It provides advanced functionality; for example, merging data from multiple records is available only in this template.

48) How do you handle a comma within a field while uploading using Data Loader?
If a field's content contains a comma, you have to enclose the content within double quotation marks: " ".

49) How many callouts to external services can be made in a single Apex transaction?
An Apex transaction can make a maximum of 100 callouts (HTTP requests or API calls); beyond that, governor limits will restrict it.

50) What is pagination in Salesforce? How can we implement it in Visualforce?
Pagination is a technique for presenting a large number of records by splitting them across multiple pages, controlling how many records are displayed on each page. By default, a list controller displays 20 records per page. To customize this, we use a controller extension to set the page size. A typical Visualforce page displays a field such as {!opp.Name} for each record, together with FIRST, NEXT, PREVIOUS, and LAST navigation buttons.
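A minimal sketch of such a pagination page is shown below. This is an illustration, not the article's original sample: the page name, the use of the Opportunity object, and the extension class OppPager are all assumptions chosen for the example.

```
<!-- OppList.page: illustrative sketch of a paginated list with navigation buttons -->
<apex:page standardController="Opportunity" recordSetVar="opps" extensions="OppPager">
  <apex:form>
    <apex:pageBlock>
      <apex:pageBlockTable value="{!opps}" var="opp">
        <apex:column value="{!opp.Name}"/>
      </apex:pageBlockTable>
      <apex:commandButton value="FIRST" action="{!first}"/>
      <apex:commandButton value="PREVIOUS" action="{!previous}" rendered="{!hasPrevious}"/>
      <apex:commandButton value="NEXT" action="{!next}" rendered="{!hasNext}"/>
      <apex:commandButton value="LAST" action="{!last}"/>
    </apex:pageBlock>
  </apex:form>
</apex:page>
```

```
// OppPager.cls: a hypothetical controller extension that overrides the default page size
public with sharing class OppPager {
    public OppPager(ApexPages.StandardSetController controller) {
        controller.setPageSize(5); // show 5 records per page instead of the default 20
    }
}
```

The {!first}, {!previous}, {!next}, and {!last} actions and the {!hasNext}/{!hasPrevious} properties come from the standard list controller, so the extension only needs to set the page size.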
Published - Tue, 06 Dec 2022
Created by - Admin s
A list of top frequently asked TestNG interview questions and answers is given below.

1) What is TestNG?
TestNG stands for "Testing Next Generation". It is an automation testing framework for the Java programming language, developed by Cédric Beust and inspired by the JUnit framework. TestNG has all the features of JUnit but also contains additional features that make it more powerful.

2) What are the advantages of TestNG?
The following are the advantages of TestNG:
- It generates reports in a proper format, including the number of test cases executed, passed, failed, and skipped.
- Multiple test cases can be grouped easily in a testng.xml file, in which you can set the priority of each test case to determine which test case is executed first.
- With TestNG, you can execute multiple test cases on multiple browsers, known as cross-browser testing.
- The TestNG framework can easily be integrated with other tools such as Maven, Jenkins, etc.
- The annotations used in TestNG are easy to understand, such as @BeforeMethod, @AfterMethod, @BeforeTest, and @AfterTest.
- WebDriver does not generate reports, while TestNG generates reports in a readable format.
- TestNG simplifies the way test cases are coded. We do not have to write a static main method; the sequence of actions is maintained by the annotations alone.
- TestNG allows you to execute test cases separately. For example, if you have six test cases, one method is written for each test case. When we run the program, five methods execute successfully and the sixth fails. To fix the error, we need to re-run only the sixth method, and this is possible only through TestNG.
Because TestNG generates a testng-failed.xml file in the test-output folder, we can run just that xml file to execute only the failed test cases.

3) How do you run a test script in TestNG?
You can run a test script in TestNG by right-clicking the TestNG class, clicking "Run As", and then selecting "TestNG Test".

4) What are the annotations used in TestNG?
The following annotations are used in TestNG:
- Precondition annotations: executed before the test methods. The precondition annotations are @BeforeSuite, @BeforeClass, @BeforeTest, and @BeforeMethod.
- Test annotation: specified before the definition of the test method, as @Test.
- Postcondition annotations: executed after the test methods. The postcondition annotations are @AfterSuite, @AfterClass, @AfterTest, and @AfterMethod.

5) What is the sequence of execution of all the annotations in TestNG?
The sequence of execution of the annotations in TestNG is given below:
@BeforeSuite
@BeforeTest
@BeforeClass
@BeforeMethod
@Test
@AfterMethod
@AfterClass
@AfterTest
@AfterSuite

6) How do you set priorities in TestNG?
If we do not prioritize the test methods, they are selected and executed in alphabetical order. If we want the test methods to be executed in a particular sequence, we need to provide a priority along with the @Test annotation. Let's understand this through an example:

package com.javatpoint;
import org.testng.annotations.Test;

public class Test_methods {
    @Test(priority=2)
    public void test1() {
        System.out.println("Test1");
    }
    @Test(priority=1)
    public void test2() {
        System.out.println("Test2");
    }
}

7) Define grouping in TestNG.
The group is an attribute in TestNG that allows you to execute multiple test cases together. For example, if we have 100 test cases for it_department and 10 test cases for hr_department, and we want to run all the it_department test cases together in a single suite, this is possible only through grouping. Let's understand this through an example:

package com.javatpoint;
import org.testng.annotations.Test;

public class Test_methods {
    @Test(groups="it_department")
    public void java() {
        System.out.println("I am a java developer");
    }
    @Test(groups="it_department")
    public void dot_net() {
        System.out.println("I am a .Net developer");
    }
    @Test(groups="it_department")
    public void tester() {
        System.out.println("I am a software tester");
    }
    @Test(groups="hr")
    public void hr() {
        System.out.print("I am hr");
    }
}

The group to run is then selected in testng.xml:

<?xml version="1.0" encoding="UTF-8"?>
<suite name="Suite">
  <test name="IT">
    <groups>
      <run>
        <include name="it_department"/>
      </run>
    </groups>
    <classes>
      <class name="com.javatpoint.Test_methods"/>
    </classes>
  </test>
</suite>

8) What is dependency in TestNG?
When we want to run test cases in a specific order, we use the concept of dependency in TestNG. Two types of dependency attributes are used:

dependsOnMethods
The dependsOnMethods attribute tells TestNG which methods this test depends on, so that those methods are executed before this test method.

package com.javatpoint;
import org.testng.annotations.Test;

public class Login {
    @Test
    public void login() {
        System.out.println("Login page");
    }
    @Test(dependsOnMethods="login")
    public void home() {
        System.out.println("Home page");
    }
}

dependsOnGroups
It is similar to the dependsOnMethods attribute, but it allows a test method to depend on a group of test methods.
It executes the group of test methods before the dependent test method.

package com.javatpoint;
import org.testng.annotations.Test;

public class Test_cases {
    @Test(groups="test")
    public void testcase1() {
        System.out.println("testcase1");
    }
    @Test(groups="test")
    public void testcase2() {
        System.out.println("testcase2");
    }
    @Test(dependsOnGroups="test")
    public void testcase3() {
        System.out.println("testcase3");
    }
}

9) What is timeOut in TestNG?
While running test cases, some test cases may take much more time than expected. In such a case, we can mark a test case as failed by using timeOut. TimeOut in TestNG allows you to configure the period of time to wait for a test to execute completely. It can be configured at two levels:
- At the suite level: it applies to all the test methods.
- At the method level: it applies to a particular test method.
The timeOut attribute can be specified as shown below:

@Test(timeOut = 700)

The above @Test annotation says that the test method will be given 700 ms to complete its execution; otherwise, it will be marked as a failed test case.

10) What is invocationCount in TestNG?
The invocationCount in TestNG is the number of times we want to execute the same test.

package com.javatpoint;
import org.testng.annotations.Test;

public class Test_cases {
    @Test(invocationCount=5)
    public void testcase1() {
        System.out.println("testcase1");
    }
}

11) What is the importance of the testng.xml file?
The testng.xml file is important for the following reasons:
- It defines the order of execution of all the test cases.
- It allows you to group test cases and execute them as per the requirements.
- It executes selected test cases.
- In TestNG, listeners can be implemented at the suite level.
- It allows you to integrate the TestNG framework with tools such as Jenkins.

12) How do you pass a parameter to a test case through the testng.xml file?
We can pass values to test methods at runtime by sending parameter values through the testng.xml file, using the @Parameters annotation:

@Parameters({"param-name"})

Let's understand this through an example:

package com.javatpoint;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.Test;
import org.testng.annotations.Parameters;

public class Web {
    @Parameters({"text"})
    @Test
    public void search(String text) {
        System.setProperty("webdriver.chrome.driver", "D:\\chromedriver.exe");
        WebDriver driver = new ChromeDriver();
        driver.get("http://www.google.com/");
        driver.findElement(By.name("q")).sendKeys(text);
    }
}

The value of the "text" parameter is supplied in the testng.xml file with a <parameter name="text" value="..."/> element; on running the testng.xml file, that value is typed into the search box.

13) How can we disable a test case from running?
We can disable a test case by using the enabled attribute: assigning the value false to the enabled attribute prevents that test case from running.

package com.javatpoint;
import org.testng.annotations.Test;

public class Test_cases {
    @Test(enabled=false)
    public void testcase1() {
        System.out.println("testcase1");
    }
    @Test
    public void testcase2() {
        System.out.println("testcase2");
    }
}

14) What is the difference between soft assertion and hard assertion?
- Soft assertion: if TestNG encounters an assertion failure during @Test, it records the failure but continues with the next statement after the assert statement.
- Hard assertion: if TestNG encounters an assertion failure during @Test, it throws an AssertionError immediately and stops execution after the assert statement.
Let's understand this through an example:

package com.javatpoint;
import org.testng.Assert;
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class Assertion {
    SoftAssert soft_assert = new SoftAssert();
    @Test
    public void Soft_Assert() {
        soft_assert.assertTrue(false);
        System.out.println("soft assertion");
    }
    @Test
    public void Hard_Assert() {
        Assert.assertTrue(false);
        System.out.println("hard assertion");
    }
}

15) What is the use of the @Listeners annotation in TestNG?
TestNG provides different kinds of listeners which can perform different actions whenever an event is triggered. The most widely used listener in TestNG is the ITestListener interface, which contains methods such as onTestSuccess, onTestFailure, onTestSkipped, etc. The following scenarios can be handled:
- If a test case fails, what action should be performed by the listener.
- If a test case passes, what action should be performed by the listener.
- If a test case is skipped, what action should be performed by the listener.
Let's understand this through an example:

package com.javatpoint;
import org.testng.Assert;
import org.testng.annotations.Listeners;
import org.testng.annotations.Test;

@Listeners(com.javatpoint.Listener.class)
public class Test_cases {
    @Test
    public void test_to_success() {
        Assert.assertTrue(true);
    }
    @Test
    public void test_to_fail() {
        Assert.assertTrue(false);
    }
}

Listener.java

package com.javatpoint;
import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestResult;

public class Listener implements ITestListener {
    @Override
    public void onTestStart(ITestResult result) { }
    @Override
    public void onTestSuccess(ITestResult result) {
        System.out.println("Success of test case and its details: " + result.getName());
    }
    @Override
    public void onTestFailure(ITestResult result) {
        System.out.println("Failure of test case and its details: " + result.getName());
    }
    @Override
    public void onTestSkipped(ITestResult result) {
        System.out.println("Skip of test case and its details: " + result.getName());
    }
    @Override
    public void onTestFailedButWithinSuccessPercentage(ITestResult result) {
        System.out.println("Failure within success percentage: " + result.getName());
    }
    @Override
    public void onStart(ITestContext context) { }
    @Override
    public void onFinish(ITestContext context) { }
}

16) What is the use of the @Factory annotation?
The @Factory annotation is useful when we want to run multiple test classes through a single test class. It is mainly used for the dynamic execution of test cases. Let's understand this through an example:

Testcase1.java

package com.javatpoint;
import org.testng.annotations.Test;

public class Testcase1 {
    @Test
    public void test1() {
        System.out.println("testcase 1");
    }
}

Testcase2.java

package com.javatpoint;
import org.testng.annotations.Test;

public class Testcase2 {
    @Test
    public void test1() {
        System.out.println("testcase 2");
    }
}

Factory1.java

package com.javatpoint;
import org.testng.annotations.Factory;

public class Factory1 {
    @Factory
    public Object[] getTestClasses() {
        Object[] tests = new Object[2];
        tests[0] = new Testcase1();
        tests[1] = new Testcase2();
        return tests;
    }
}

17) What is the difference between the @Factory and @DataProvider annotations?
- @DataProvider: an annotation used by TestNG to execute a test method multiple times, based on the data supplied by the DataProvider.
- @Factory: an annotation used by TestNG to execute the test methods present in a test class using different instances of that class.
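To make the @DataProvider vs @Factory contrast concrete without a TestNG runtime, here is a small plain-Java sketch. It is a hand-rolled analogy, not TestNG's actual machinery, and all class and method names (SearchTest, dataDriven, factory) are invented for illustration: dataDriven() mimics @DataProvider by invoking the same test logic once per data row, while factory() mimics @Factory by creating a separate test-class instance per configuration.

```java
import java.util.ArrayList;
import java.util.List;

// A hand-rolled analogy of @DataProvider vs @Factory (not real TestNG).
public class FactoryVsDataProviderDemo {

    // The "test class": each instance carries its own state,
    // like an instance produced by a @Factory method.
    static class SearchTest {
        private final String query;
        SearchTest(String query) { this.query = query; }
        String run() { return "searched:" + query; }
    }

    // @DataProvider analogy: the SAME test logic runs once per data row.
    static List<String> dataDriven(String[] rows) {
        List<String> results = new ArrayList<>();
        for (String row : rows) {
            results.add("searched:" + row); // one invocation per data row
        }
        return results;
    }

    // @Factory analogy: SEPARATE test-class instances, each run once.
    static List<String> factory(String[] queries) {
        List<String> results = new ArrayList<>();
        for (String q : queries) {
            results.add(new SearchTest(q).run()); // new instance per configuration
        }
        return results;
    }

    public static void main(String[] args) {
        String[] data = {"java", "testng"};
        System.out.println(dataDriven(data)); // [searched:java, searched:testng]
        System.out.println(factory(data));    // [searched:java, searched:testng]
    }
}
```

The observable results coincide here; the difference is structural: data-driven repetition reuses one instance, while the factory hands TestNG a fresh instance (with its own state) per configuration.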
Published - Tue, 06 Dec 2022
Created by - Admin s
A list of top frequently asked Informatica interview questions and answers is given below.

1) What is Informatica? Why do we need it?
Informatica is a software development company founded in 1993 by Gaurav Dhillon and Diaz Nesamoney. Informatica's flagship product is an ETL tool that offers data integration solutions. ETL tools are used to extract, transform, and load data; therefore, we can say that Informatica is an ETL tool used to extract data from one database and store it in another.
- Extract: extraction is the process of reading data from a database. In this phase, the ETL tool extracts data from multiple sources. Validation rules are applied to test whether the data matches the expected values; data that fails validation is rejected.
- Transform: transformation is the process of converting data from one form into another so that it can be placed in the target database. Transformations include formatting the data, re-sorting rows or columns, combining two values into one, and splitting one value into two or three values.
- Load: in the load phase, the data is moved into the target database. Once the data is loaded, the ETL process is complete.

2) What are the popular Informatica products?
The following are the popular Informatica products:
- PowerCenter
- PowerMart
- PowerExchange
- PowerCenter Connect
- PowerChannel
- Metadata Exchange
- PowerAnalyzer
- SuperGlue

3) What is Informatica PowerCenter?
Informatica PowerCenter is an ETL tool used to build enterprise data warehouses. It is a highly available, fully scalable, and high-performing tool. It provides reliable solutions to the IT management team, as it not only delivers data to meet the operational and analytical requirements of the business but also supports various data integration projects.

4) What is a data warehouse?
A data warehouse is a technique of integrating data from multiple sources. It involves analytical reporting, data integration, data cleaning, and data consolidation.
- A data warehouse is designed mainly for querying and data analysis rather than for transaction processing.
- It is used to transform information into useful data whenever the user requires it.
- A data warehouse is an environment, not a product; it provides current and historical decision-support information to users that is not accessible in a traditional operational database.
- The data that is processed and transformed in the data warehouse can be accessed using Business Intelligence tools, SQL clients, and spreadsheets.

5) How can Informatica be used by an organization?
Informatica can be used in an organization in the following ways:
- Data migration: transferring data from a legacy system to a new database system.
- Data warehousing: moving data from databases or production systems into a data warehouse.
- Data integration: integrating data from multiple sources or file-based systems, for example while cleaning up the data.

6) Explain the Informatica workflow.
An Informatica workflow is a collection of tasks connected with a start task and triggered in the proper sequence to execute the process. A workflow is created either manually or automatically by using the Workflow Designer tool.

7) Mention some types of transformations.
Commonly used transformations in Informatica include Source Qualifier, Expression, Aggregator, Sorter, Filter, Joiner, Router, Rank, Sequence Generator, Stored Procedure, and Lookup transformations; several of these are discussed in the questions below.

8) What is the difference between active and passive transformations?
An active transformation is a transformation that changes the number of rows when the source data passes through it.
For example, the Aggregator transformation is an active transformation: it performs aggregations on groups, such as sums, and thereby reduces the number of rows.
A passive transformation is a transformation that does not change the number of rows when the source data passes through it, i.e., no new rows are added and no existing rows are dropped. In this transformation, the number of input and output rows is the same.

9) Explain the difference between a data warehouse and a data mart.
A data warehouse and a data mart are both structured repositories that store and manage data. A data warehouse stores data centrally for the entire business, while a data mart stores specific data rather than the data of the entire business. Querying the data warehouse directly can be a tedious task, so data marts are used: a data mart is a smaller set of data that lets you access the data faster and more efficiently.

10) What is Repository Manager?
A repository is a relational database used to store metadata. Metadata can include mappings that describe how to transform the data, sessions that describe when the Informatica server should perform the transformations, and administrative information such as usernames and passwords, permissions and privileges, and product versions. The repository is created and maintained through the Repository Manager client tool.
Repository Manager is the tool that manages and organizes the repository. It can create folders to organize the data and groups to handle multiple users.

11) What is mapping?
A mapping is a pipeline or structural flow of data that describes how data flows from the source to the destination through transformations. A mapping consists of the following components:
- Source definition: defines the structure and characteristics of the source, such as its data types and the type of the data source. You can create multiple source definitions by using the Informatica Source Analyzer.
- Target definition: defines the final destination, or target, where the data will be loaded.
- Transformation: defines how the source data should be transformed and which functions are applied during the transformation process.
- Links: define how data flows from the source definition to the target table through the various transformations.

12) What is a session?
A session is a property in Informatica that holds a set of instructions defining when and how to move data from the source table to the target table.
- A session is a task that we create in the Workflow Manager. Every session you create must have a mapping associated with it.
- A session can have only a single mapping at a time, and that mapping cannot be changed.
- To be executed, a session must be added to a workflow.
- A session can be a reusable or a non-reusable object, where reusable means the same session can be used in multiple workflows.

13) What is the Designer?
The Designer is a graphical user interface used to build and manage objects such as source tables, target tables, mapplets, mappings, and transformations. A mapping is created in the Designer by using the Source Analyzer to import the source table and the Target Designer to import the target table.
The Designer contains multiple components:
- Navigator: used to connect to the Repository Service, open folders, copy objects, and create shortcuts.
- Workspace: the space where the work is done; in the workspace we can create and edit repository objects such as sources, targets, mapplets, mappings, and transformations.
- Toolbar: offers components such as Repository, Edit, Tools, Versioning, Windows, and Help.
- Output/control panel: displays output about the task performed in the Designer, such as whether a mapping is valid or has been saved, and it also displays errors.
- Status bar: displays the status of the current operation.

14) What is a domain?
A domain is a collection of nodes (machines) and services, i.e., Repository Services, Integration Services, nodes, etc. It is an administrative unit from which you manage and control things such as configuration, users, and security. You can have a single domain or multiple domains; for example, if we have three departments, such as development, test, and production, we may have one domain for each department, i.e., three domains.

15) What is Workflow Manager?
Workflow Manager is used to create workflows and worklets.
Workflow
- A workflow is a set of instructions used to execute the mappings.
- A workflow contains various tasks, such as the session task, command task, event-wait task, email task, etc., which are used to execute the sessions.
- It is also used to schedule the mappings.
- All the tasks inside a workflow are connected to each other through links.
- After creating a workflow, we can execute it in the Workflow Manager and monitor its progress through the Workflow Monitor.
Worklet
- A worklet is an object that groups a set of tasks so that they can be reused in multiple workflows.
- A worklet is similar to a workflow, but it does not have any scheduling information.
- In a worklet, you can group tasks in a single place so that they can be easily identified.

16) What is Workflow Monitor?
Workflow Monitor is used to monitor the execution of workflows, or of the tasks available in a workflow.
It is mainly used to monitor information such as event-log details, the list of executed workflows, and their execution times. Workflow Monitor can be used to perform the following activities:
- You can see the details of an execution.
- You can see the history of workflow executions.
- You can stop, abort, or restart workflows.
- It displays the workflows that have been executed at least once.
It consists of the following windows:
- Navigator window: displays the monitored repositories, servers, and repository objects.
- Output window: displays messages coming from the Integration Service and the Repository Service.
- Time window: displays the progress of workflow executions.
- Gantt Chart view: displays the progress of workflow executions in a chronological, bar-chart form.
- Task view: displays the details of workflow executions in a report (tabular) format.

17) Explain the types of transformations.
Transformations are used to transform the source data into the target data; they ensure that data is loaded into the target based on the requirements of the target system. A transformation is basically a repository object that can read, modify, and pass data from the source to the target. There are two types of transformations:
- Active transformation: a transformation that can modify the number of rows that pass from source to target, i.e., it can eliminate rows that do not meet the transformation's condition.
- Passive transformation: a transformation that does not eliminate rows, i.e., all the data passes from source to target without any change in row count.

18) What is the SQ transformation?
SQ stands for Source Qualifier. The Source Qualifier transformation selects records from one or more sources, and the sources can be relational tables, flat files, or Informatica PowerExchange services.
- It is an active and connected transformation.
- When you add source tables to a mapping, a Source Qualifier is added automatically.
- It displays the transformation datatypes, i.e., it converts the source datatypes into Informatica-compatible datatypes. If a source datatype does not match its Informatica-compatible datatype, the mapping becomes invalid when you save it.
- The SQ transformation is active because you can apply business rules and filters in it to overcome performance issues.
- Using the SQ transformation, you can filter the data and apply joins on the tables.
- A Source Qualifier transformation can join homogeneous tables, i.e., data originating from the same database, in a single SQ transformation.
The following are the properties of the SQ transformation:
- User-defined SQL query
- User-defined joins
- Add/modify the WHERE clause using a filter
- Add/modify ORDER BY using sorted ports
- Select unique/distinct rows

19) What is an Expression transformation?
- Expression transformation is a passive and connected transformation.
- It is used to manipulate values on a row-by-row basis.
- Examples of expression transformation are concatenating the first name and last name, adjusting student records, converting strings to dates, etc.
- It can also check conditional statements before passing the data on to other transformations.
- Expression transformation uses numeric and logical operators.
The following operations are performed by the Expression transformation:
- Data manipulation: operations such as concatenation, truncation, and rounding.
- Datatype conversion: converting one datatype into another.
- Data cleansing: checking for nulls, testing for spaces, testing for numbers.
- Date manipulation: manipulating dates.
- Scientific calculations and numerical operations: exponential, log, modulus, and power operations.
There are three types of ports used in the Expression transformation:
- Input: an input port holds the values used in the calculation.
For example, to calculate the total salary, we must know the salary and the incentives of the employee; these values arrive through input ports.
- Output: we provide an expression for each output port, and the return value of the output port should match the return type of the expression.
- Variable: a temporary variable used within the calculation.

20) What is a Sorter transformation?
- It is an active and connected transformation.
- It is used to sort the data in either ascending or descending order, similar to the ORDER BY clause in SQL.
- It can perform case-sensitive sorting, and it can also be used to specify whether the output rows should be distinct.
- The Sorter transformation is active because eliminating duplicates changes the number of rows.
Properties of the Sorter transformation:
- Sorter cache size: the Integration Service uses this property to determine the maximum amount of memory required to perform the sort operation.
- Case sensitive: when this property is enabled, the Integration Service gives uppercase characters higher priority than lowercase characters when sorting.
- Work directory: the directory in which the Integration Service creates temporary files while sorting the data. Once the data has been sorted, the temporary files are removed from the work directory.
- Distinct output rows: this property makes the Integration Service produce distinct output rows.
- Tracing level: controls the number and type of sorter error and status messages that the Integration Service writes to the session log.
- Null treated low: enable this property when you want the Integration Service to treat null values as lower than any other value; disable it when you want null values treated as higher than any other value.

21) What is an Aggregator transformation?
- The Aggregator transformation is a connected and active transformation.
- It is used to perform aggregate calculations over groups of rows, such as sum, average, and count, similar to the SQL aggregate functions sum(), avg(), count(), etc.
- For example, if you want to calculate the total salary of all employees, an Aggregator transformation is used.
- The Aggregator transformation uses temporary storage to hold the records while the calculations are performed.
Components of the Aggregator transformation:
- Aggregate cache: the Integration Service uses the aggregate cache to store data until the aggregate calculation is complete. It stores group values in the index cache and row data in the data cache.
- Aggregate expression: an aggregate expression is provided on an output port; output ports can also contain non-aggregate expressions and conditional clauses.
- Group by port: this property is used to create the groups. A group-by port can be an input, output, or variable port.
- Sorted input: this property is used to improve session performance. To use sorted input, you must pass data to the Aggregator transformation already sorted by the group-by ports, in either ascending or descending order.

22) What is a Filter transformation?
- The Filter transformation is an active and connected transformation.
- It filters the rows that pass through it, i.e., it changes the number of rows that are passed through.
- It applies a filter condition to the incoming data; the condition returns either a true or a false value.
If the value is true means that the condition is satisfied, then data is passed through, and if the value is false means that the filter condition is not satisfied, then integration service drops the data and writes the message to the session log.23) What is a Joiner Transformation?Joiner Transformation is an active and connected transformation.It allows you to create the joins in Informatica, similar to the joins that we create in database.In joiner transformation, joins are used for two sources and these sources are:Master sourceDetail sourceIn joiner transformation, you need to choose which data source will be Master, and which data source will be Detail.There are four types of joins used in a joiner transformation:Master outer joinIn Master outer join, the resultset contains all the records from the Detail source and the matching rows in the master source. This join will be similar to the Right join in SQL.Detail outer joinIn Detail outer join, the resultset contains all the records from the Master source and the matching rows in the Detail source. This join will be similar to the Left join in SQL.Full outer joinIn Full outer join, the resultset contains all the records from both the sources, i.e., Master and Detail source.Normal joinIn Normal join, the resultset contains only the matching rows between Master and Detail source. This join is similar to the inner join in SQL.24) What is a Router Transformation?Router transformation is an active and connected transformation.Router transformation is similar to the filter transformation as both the transformations test the input data based on the filters.In Filter transformation, you can apply only one filter or condition, and if the condition is not satisfied, then a particular is dropped. But in Router transformation, more than one condition can be applied. 
Therefore, the same input data can be checked against multiple conditions.

25) What is Rank Transformation?

Rank transformation is an active and connected transformation. It filters the data based on groups and ranks. For example, to get the top 3 highest-paid employees in each department, you would use a rank transformation. The rank transformation contains an output port (RANKINDEX) that assigns a rank to each row.

26) What is a Sequence Generator Transformation?

Sequence Generator transformation is a passive and connected transformation. It generates numeric values. It is used to create unique primary key values, replace missing primary keys, or cycle through a sequential range of numbers.

27) What is a Stored Procedure Transformation?

Stored Procedure transformation is a passive transformation. It can be used in both connected and unconnected mode. It is used to run stored procedures in the database; stored procedures are precompiled database statements (for example, PL/SQL) that are executed using EXECUTE or CALL statements. Three types of data can be passed between the integration service and the stored procedure:

Input/Output parameters
Used to send data to and receive data from the stored procedure.

Return values
When a stored procedure runs, it returns a single value, which can be a user-defined output value or a single integer value.
If the stored procedure returns a resultset, the stored procedure transformation accepts only the first value of the resultset.

Status codes
The stored procedure transformation provides a status code that indicates whether the stored procedure completed successfully.

28) What is a Lookup Transformation?

Lookup transformation can be configured as either an active or a passive transformation. It can be used in both connected and unconnected mode. It is used to look up data in a source, source qualifier, flat file, or relational table. You can import the lookup definition from a flat file or a relational database; the integration service queries the lookup source based on the lookup ports and the lookup condition, and returns the result to other transformations. A lookup transformation has two key parts:

Lookup table
The lookup table is imported from the mapping source or target database using the Informatica client and server.

Lookup condition
The lookup condition determines whether the input data matches a value in the lookup table.

The following activities can be performed by a lookup transformation:

Get a related value
Retrieve a value from the lookup table based on a value in the source table. For example, retrieve the student name from the lookup table based on the student id in the source table.

Get multiple values
Retrieve multiple rows from a lookup table. For example, retrieve all the students branch-wise.

Perform a calculation
Retrieve a value from the lookup table and perform a calculation on it.
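A lookup feeding a calculation can be pictured as a dictionary lookup; the student ids, marks, and the total of 500 below are all invented for illustration:

```python
# Sketch of a lookup used for a calculation: the "lookup table" is a
# dictionary keyed on student id; the looked-up value (marks out of 500)
# feeds a percentage calculation. All data here is made up.
marks_lookup = {101: 450, 102: 380}   # student_id -> marks out of 500

source_rows = [{"student_id": 101}, {"student_id": 102}]

for row in source_rows:
    marks = marks_lookup[row["student_id"]]   # lookup condition: matching id
    row["percentage"] = marks / 500 * 100     # calculation on the looked-up value

# source_rows now carries the computed percentages alongside the ids.
```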
For example, retrieve the marks of students and then calculate their percentages.

Update slowly changing dimension tables
Determine whether a row already exists in the target table.

29) What is a Union Transformation?

Union transformation is an active transformation. It is similar to SQL UNION ALL: it combines data from multiple pipelines or sources into a single output, which is then written to the target table.

Guidelines for the Union transformation:

A Union transformation has multiple input groups but only one output group.
It does not remove duplicates from the input sources. To remove duplicate rows, use a Sorter transformation with the Distinct option downstream.
It does not generate transactions.
You cannot connect a Sequence Generator transformation directly to a Union transformation.

30) What is an Update Strategy Transformation?

Update Strategy transformation is an active and connected transformation. It is used to flag records for insert, update, or delete in the target table. It can also flag records for rejection so that they never reach the target table. How the target table is maintained depends on how changes are made to the existing rows. An update strategy works at two levels:

Session level
When configuring the session, you can either instruct the integration service to treat all rows the same way (treat all rows as insert, delete, or update), or use the instructions coded into the mapping to flag rows for different database operations.

Mapping level
Within a mapping, you use the update strategy transformation to flag rows for insert, update, delete, or reject.

31) What are the tasks that can be performed using SQ?

The following tasks can be performed using a Source Qualifier (SQ):

Joins
You can join two or more tables belonging to the same database. By default, the tables are joined using their primary key-foreign key relationships.
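The default primary key/foreign key join can be pictured as an ordinary SQL inner join; a sketch using Python's sqlite3 module, with invented table and column names:

```python
import sqlite3

# Illustrative tables: DEPT (primary key DEPTNO) and EMP (foreign key DEPTNO).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE DEPT (DEPTNO INTEGER PRIMARY KEY, DNAME TEXT);
    CREATE TABLE EMP  (EMPNO INTEGER PRIMARY KEY, ENAME TEXT,
                       DEPTNO INTEGER REFERENCES DEPT(DEPTNO));
    INSERT INTO DEPT VALUES (10, 'SALES'), (20, 'HR');
    INSERT INTO EMP  VALUES (1, 'Amit', 10), (2, 'Bina', 20);
""")

# The default join condition follows the PK/FK relationship between the tables.
result = conn.execute("""
    SELECT E.ENAME, D.DNAME
    FROM EMP E JOIN DEPT D ON E.DEPTNO = D.DEPTNO
""").fetchall()
conn.close()
```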
You can also explicitly specify a join condition in the user-defined join property.

Filter rows
You can filter rows; the integration service adds a WHERE clause to the default query.

Sorted input
You can sort the input data by specifying the number of sorted ports; the integration service adds an ORDER BY clause to the default query.

Distinct rows
You can select distinct rows from the source table by enabling the Select Distinct property; the integration service adds SELECT DISTINCT to the default query.

Custom SQL query
You can write your own query to perform calculations on the source data.
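The SQ properties above map onto ordinary SQL clauses; a sketch using sqlite3, with an invented source table:

```python
import sqlite3

# Illustrative source table. The SQ properties correspond to SQL clauses:
# filter rows -> WHERE, sorted input -> ORDER BY, distinct rows -> SELECT DISTINCT.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SRC (CITY TEXT, SALES INTEGER);
    INSERT INTO SRC VALUES ('Pune', 5), ('Pune', 5), ('Agra', 9), ('Agra', 2);
""")

filtered = conn.execute(
    "SELECT CITY, SALES FROM SRC WHERE SALES > 3").fetchall()   # filter rows
sorted_rows = conn.execute(
    "SELECT CITY, SALES FROM SRC ORDER BY SALES").fetchall()    # sorted input
distinct = conn.execute(
    "SELECT DISTINCT CITY, SALES FROM SRC").fetchall()          # distinct rows
conn.close()
```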
Published - Tue, 06 Dec 2022