Understanding Static And Dynamic Data


Data collection practices are receiving ever more attention and growing in sophistication. Web scraping, and automated acquisition processes in general, have changed the nature of data collection so much that old challenges were solved and new problems emerged.

One of them is the selection of data with regard to dynamicity. Since we’re now able to collect previously unthinkable volumes of information in mere seconds, getting a particular sample is no longer an issue. Additionally, in business, we will often scour the same sources over and over to monitor competitors, brands, and anything else that’s relevant to the industry.

Data dynamicity is, as such, a question of optimization. Refreshing data each time might not be necessary when certain fields are updated infrequently, or when those changes have no importance to the use case.

Static vs dynamic data

Static data can be defined in a two-fold manner. As an object of information, it’s one that doesn’t change (frequently). Examples of such sources could be editorial articles, country or city names, descriptions of events and locations, etc. A factual news report, once published, is unlikely to ever be changed in the future.

Dynamic data, on the other hand, is something that is constantly in flux, often due to external factors. Frequently encountered types of dynamic data might be product pricing, stock numbers, reservation counts, etc.

Somewhere in the middle lies the twilight zone of both definitions, as is the case when you try to put everything into neat little boxes. There are objects of information such as product descriptions, meta titles of articles, and commercial pieces of content that change with some frequency.

Whether these fall under static or dynamic data will depend upon the intended use. Projects, independent of the type of data, will have more or less use for specific informational sources. SEO tools, for example, might find less value in pricing data, but will want to refresh meta titles, descriptions, and many other features.

Pricing models, on the other hand, will scarcely have use for frequently updated product descriptions. They might need to grab a description once for product-matching purposes. If it gets updated for SEO purposes down the line, there’s still no reason to ever revisit it.

Mapping out your data

Every data analysis and collection project will have its necessities. Going back to the pricing model example, two technical features will be necessary – product matching and pricing data.

Products need to be matched as any automated pricing implementation needs accuracy. Mismatching products and changing pricing could cause an enormous amount of damage to revenue, especially if the changes go unaddressed.

Most of the matching happens through product titles, descriptions, and specifications. The former two will change often, especially on ecommerce platforms, where optimizing for keywords is an important ranking factor. These changes, however, will have no impact on the ability to match product identities, as fundamental features will not change (e.g., an iPhone will always remain an iPhone).

As such, descriptions and titles might be treated as static data, even if they are somewhat dynamic. For the project’s purposes, the changes are not impactful enough to warrant continued monitoring.
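To make this concrete, here is a minimal sketch (in JavaScript, with hypothetical field names that are assumptions for illustration, not a prescribed schema) of matching product identities on stable, fundamental features while ignoring frequently rewritten titles and descriptions:

```javascript
// Build a matching key from fundamental, rarely-changing features only.
// SEO-driven text such as titles and descriptions is deliberately ignored.
function identityKey(product) {
  return [product.brand, product.model, product.capacity]
    .map((v) => String(v).trim().toLowerCase())
    .join("|");
}

const listingA = {
  brand: "Apple", model: "iPhone 15", capacity: "128GB",
  title: "Apple iPhone 15 128GB, Best Deal!",
};
const listingB = {
  brand: "apple", model: "iPhone 15", capacity: "128gb",
  title: "iPhone 15 (128GB) | Free Shipping",
};

console.log(identityKey(listingA) === identityKey(listingB)); // true
```

Both listings resolve to the same identity even though their titles differ, which is exactly why the volatile text can be treated as static for this project.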

Pricing data, as may already be obvious, is not only naturally in constant flux, but catching changes as they happen is essential to the project. As such, it would certainly be considered dynamic data.

Reducing costs with mapping

Regardless of the integration method, whether internal or external, data collection and storage practices are costly. Additionally, most companies will use cloud-based storage solutions, which often count every write toward the overall cost, meaning that refreshing data cuts into the budget.

Mapping out data types (i.e., static or dynamic) can optimize data collection processes through several avenues. First, pages can be categorized into static-data, dynamic-data, or mixed. While the first category might be somewhat shallow, it would still indicate that there’s no need to revisit those pages frequently, if at all.

Mixed pages might also make it easier to reduce write and storage costs. Reducing the amount of data transferred from one place to another is, by itself, a form of optimization, but such reductions become more relevant when bandwidth, read/write, and storage costs are taken into account.

Since, however, scrapers usually download the entire HTML, any visit to a URL will have the entire object stored in memory. With the use of external providers, costs are usually allocated per request, so there’s no difference between updating all data fields or only the dynamic ones.

Yet, in some applications, historical data might be necessary. Downloading and updating the same field with the same data every time period would run up write and storage costs without good reason. A simple comparison function can be implemented that checks whether anything has changed and performs a write only when something has.

Finally, with internal scraping pipelines, all of the above still applies, however, to a much greater degree. Costs can be optimized by reducing unnecessary scrapes, limiting the amount of writes, and parsing only the necessary parts of the HTML.

In the end, developing frameworks is taking the first step towards true optimization. They may start out, as this one may be, as overly theoretical, but frameworks give us a lens for interpreting processes that are already in place.



Understanding Excel VBA Data Types (Variables And Constants)

In Excel VBA, you would often be required to use variables and constants.

When working with VBA, a variable is a location in your computer’s memory where you can store data. The type of data you can store in a variable would depend on the data type of the variable.

For example, if you want to store integers in a variable, your data type would be ‘Integer’ and if you want to store text then your data type would be ‘String’.

More on data types later in this tutorial.

While a variable’s value changes when the code is in progress, a constant holds a value that never changes. As a good coding practice, you should define the data type of both – variable and constant.

When you code in VBA, you would need variables that you can use to hold a value.

The benefit of using a variable is that you can change the value of the variable within the code and continue to use it in the code.

For example, below is a code that adds the first 10 positive numbers and then displays the result in a message box:

Sub AddFirstTenNumbers()
    Dim Var As Integer
    Dim i As Integer
    Dim k As Integer
    For i = 1 To 10
        k = k + i
    Next i
    MsgBox k
End Sub

There are three variables in the above code – Var, i, and k.

The above code uses a For Next loop in which the variables i and k are updated as the loop runs (Var is declared but not used here).

The usefulness of a variable lies in the fact that it can be changed while your code is in progress.

Below are some rules to keep in mind when naming the variables in VBA:

You can use letters, numbers, and some punctuation characters, but the first character must be a letter.

You can not use a space or a period in the variable name. However, you can use an underscore character to make variable names more readable (such as Interest_Rate).

You can not use special characters (#, $, %, &, or !) in variable names

VBA doesn’t distinguish between the case in the variable name. So ‘InterestRate’ and ‘interestrate’ are the same for VBA. You can use mixed case to make the variables more readable.

VBA has some reserved names that you can not use for a variable name. For example, you can not use the word ‘Next’ as a variable name, as it’s a reserved name for the For Next loop.

Your variable name can be up to 255 characters long.

To make the best use of variables, it’s a good practice to specify the data type of the variable.

The data type you assign to a variable will be dependent on the type of data you want that variable to hold.

Below is a table that shows all the available data types you can use in Excel VBA:

| Data Type | Bytes Used | Range of Values |
| --- | --- | --- |
| Byte | 1 byte | 0 to 255 |
| Boolean | 2 bytes | True or False |
| Integer | 2 bytes | -32,768 to 32,767 |
| Long (long integer) | 4 bytes | -2,147,483,648 to 2,147,483,647 |
| Single | 4 bytes | -3.402823E38 to -1.401298E-45 for negative values; 1.401298E-45 to 3.402823E38 for positive values |
| Double | 8 bytes | -1.79769313486231E308 to -4.94065645841247E-324 for negative values; 4.94065645841247E-324 to 1.79769313486232E308 for positive values |
| Currency | 8 bytes | -922,337,203,685,477.5808 to 922,337,203,685,477.5807 |
| Decimal | 14 bytes | +/-79,228,162,514,264,337,593,543,950,335 with no decimal point; +/-7.9228162514264337593543950335 with 28 places to the right of the decimal |
| Date | 8 bytes | January 1, 100 to December 31, 9999 |
| Object | 4 bytes | Any Object reference |
| String (variable-length) | 10 bytes + string length | 0 to approximately 2 billion |
| String (fixed-length) | Length of string | 1 to approximately 65,400 |
| Variant (with numbers) | 16 bytes | Any numeric value up to the range of a Double |
| Variant (with characters) | 22 bytes + string length | Same range as for variable-length String |
| User-defined | Varies | The range of each element is the same as the range of its data type |

When you specify a data type for a variable in your code, it tells VBA how to store this variable and how much space to allocate for it.

For example, if you need to use a variable that is meant to hold the month number, you can use the BYTE data type (which can accommodate values from 0 to 255). Since the month number is not going to be above 12, this will work fine and also reserve less memory for this variable.

On the contrary, if you need a variable to store row numbers in Excel, you need a data type that can accommodate a number up to 1,048,576. So it’s best to use the Long data type.

As a good coding practice, you should declare the data type of variables (or constants) when writing the code. Doing this makes sure that VBA allocates only the specified memory to the variable and this can make your code run faster.

Below is an example where I have declared different data types to different variables:

Sub DeclaringVariables()
    Dim X As Integer
    Dim Email As String
    Dim FirstName As String
    Dim RowCount As Long
    Dim TodayDate As Date
End Sub

To declare a variable data type, you need to use the DIM statement (which is short for Dimension).

In ‘Dim X As Integer’, I have declared the variable X as the Integer data type.

Now when I use it in my code, VBA would know that X can hold only integer data type.

If I try to assign a value to it which is not an integer, I will get a ‘Type mismatch’ error when the code runs.

Note: You can also choose to not declare the data type, in which case, VBA automatically considers the variable of the variant data type. A variant data type can accommodate any data type. While this may seem convenient, it’s not a best practice to use variant data type. It tends to take up more memory and can make your VBA code run slower.

While you can code without ever declaring variables, it’s a good practice to do this.

Apart from saving memory and making your code more efficient, declaring variables has another major benefit – it helps trap errors caused by misspelled variable names.

To make sure you’re forced to declare variables, add the following line to the top of your module.

Option Explicit

When you add ‘Option Explicit’, you will be required to declare all the variables before running the code. If there is any variable that has not been declared, VBA would show an error.

There is a huge benefit in using Option Explicit.

Sometimes, you may end up making a typing error and enter a variable name which is incorrect.

Normally, there is no way for VBA to know whether it’s a mistake or is intentional. However, when you use ‘Option Explicit’, VBA would see the misspelled variable name as a new variable that has not been declared and will show you an error. This will help you identify these misspelled variable names, which can be quite hard to spot in a long code.

Below is an example where using ‘Option Explicit’ identifies the error (which couldn’t have been trapped had I not used ‘Option Explicit’)

Sub CommissionCalc()
    Dim CommissionRate As Double
    If Range("A1").Value >= 10000 Then
        CommissionRate = 0.1
    Else
        CommissionRtae = 0.05
    End If
    MsgBox "Total Commission: " & Range("A1").Value * CommissionRate
End Sub

Note that I have misspelled the word ‘CommissionRate’ once in this code.

If I don’t use Option Explicit, this code would run and give me the wrong total commission value (in case the value in cell A1 is less than 10000).

But if I use Option Explicit at the top of the module, it will not let me run this code before I either correct the misspelled word or declare it as another variable. It will show a ‘Variable not defined’ compile error.

While you can insert the line ‘Option Explicit’ every time you code, here are the steps to make it appear by default:

In the VB Editor, go to Tools, then Options, and in the Editor tab, check the option – “Require Variable Declaration”.

Once you have enabled this option, whenever you open a new module, VBA would automatically add the line ‘Option Explicit’ to it.

Note: This option will only impact modules you create after it is enabled. Existing modules are not affected.

So far, we have seen how to declare a variable and assign data types to it.

In this section, I will cover the scope of variables and how you can declare a variable to be used in a subroutine only, in an entire module or in all the modules.

The scope of a variable determines where the variable can be used in VBA.

There are three ways to scope a variable in Excel VBA:

Within a single subroutine (Local variables)

Within a module (Module-level variables)

In all modules (Public variables)

Let’s look at each of these in detail.

When you declare a variable within a subroutine/procedure, then that variable is available only for that subroutine.

You can not use it in other subroutines in the module.

As soon as the subroutine ends, the variable gets deleted and the memory used by it is freed.

For example, the variables in the below subroutine are declared within it and would be deleted when the subroutine ends:

Sub LocalVariables()
    Dim i As Integer
    Dim k As Integer
    i = 10
    k = 20
    MsgBox i + k
End Sub

When you want a variable to be available for all the procedures in a module, you need to declare it at the top of the module (and not in any subroutine).

Once you declare it at the top of the module, you can use that variable in all the procedures in that module.

For instance, a variable ‘i’ declared at the top of the module (with a Dim statement before any procedure) is available to all the procedures in that module.

Note that when the subroutine ends, module-level variables are not deleted (they retain their values).

Below is an example with two procedures. When I run the first procedure and then run the second one, the value of ‘i’ becomes 30 (as it carries the value of 10 from the first procedure):

Dim i As Integer

Sub Procedure1()
    i = i + 10
    MsgBox i
End Sub

Sub Procedure2()
    i = i + 20
    MsgBox i
End Sub

If you want a variable to be available in all the procedures in the workbook, you need to declare it with the Public keyword (instead of DIM).

The below line of code at the top of the module would make the variable ‘CommissionRate’ available in all the modules in the workbook.

 Public CommissionRate As Double

You can insert the variable declaration (using the Public keyword), in any of the modules (at the top before any procedure).

When you work with local variables, as soon as the procedure ends, the variable would lose its value and would be deleted from VBA’s memory.

In case you want the variable to retain the value, you need to use the Static keyword.

Let me first show you what happens in a normal case.

In the below code, when I run the procedure multiple times, it will show the value 10 every time.

Sub Procedure1()
    Dim i As Integer
    i = i + 10
    MsgBox i
End Sub

Now if I use the Static keyword instead of DIM, and run the procedure multiple times, it will keep on showing values in increments of 10. This happens as the variable ‘i’ retains its value and uses it in the calculation.

Sub Procedure1()
    Static i As Integer
    i = i + 10
    MsgBox i
End Sub

While variables can change during the code execution, if you want to have fixed values, you can use constants.

A constant lets you assign a fixed value to a name that you can then use throughout your code.

The benefit of using a constant is that it makes it easy to write and comprehend code, and also allows you to control all the fixed values from one place.

For example, if you are calculating commissions and the commission rate is 10%, you can create a constant (CommissionRate) and assign the value 0.1 to it.

In the future, if the commission rate changes, you just need to make the change in one place instead of manually changing it everywhere in the code.

Below is a code example where I have assigned a value to the constant:

Sub CalculateCommission()
    Dim CommissionValue As Double
    Const CommissionRate As Double = 0.1
    CommissionValue = Range("A1") * CommissionRate
    MsgBox CommissionValue
End Sub

The following line is used to declare the constant:

Const CommissionRate As Double = 0.1

When declaring constants, you need to start with the keyword ‘Const‘, followed by the name of the constant.

Note that I have specified the data type of the constant as Double in this example. Again, it’s a good practice to specify the data type to make your code run faster and be more efficient.

If you don’t declare the data type, it would be considered as a variant data type.

Just like variables, constants can also have scope based on where and how these are declared:

Within a single subroutine (Local constants): These are available in the subroutine/procedure in which these are declared. As the procedure ends, these constants are deleted from the system’s memory.

Within a module (Module-level constants): These are declared at the top of the module (before any procedure). These are available for all the procedures in the module.

In all modules (Public constants): These are declared using the ‘Public’ keyword, at the top of any module (before any procedure). These are available to all the procedures in all the modules.


Understanding AI And Robotic Process Automation

It’s no exaggeration to say that the RPA sector – Robotic Process Automation – is a red hot market. Gartner research identifies RPA as the fastest growing sector in all of technology. Many of the top RPA companies are growing rapidly. 

Yet despite the benefits of RPA, confusion abounds in the sector. The very idea of having software robots working alongside humans is new, and some companies have struggled to get real ROI. The element of artificial intelligence raises myriad questions. To add clarity, in this webinar we’ll discuss:

Why RPA has become the fastest-growing enterprise software category.

How AI technologies are being combined with RPA, and the various impacts.

The most important attributes of intelligent automation technology in today’s market.

The remarkable future of AI and RPA.

Please join this wide-ranging discussion with three key thought leaders in the RPA sector.


Kirkwood: “So originally, we went to see Blue Prism, which is a British company; in fact, they founded, they created the RPA market and category. And then we went to see a small band, and I mean small, it was only seven people in Bucharest in Romania, for a company called DeskOver, and realized the technology they had could actually be applicable, and so that’s what the team used. And the value of that was so great that, well, in fact, David and I had a ‘Road to Damascus’ moment and realized that this was the future of automation. DeskOver became UiPath, David set up an organization to do the implementation of RPA, I joined UiPath originally as Chief Operating Officer, and now as Chief Evangelist.”

“Why is it moving so fast? A, it’s luck, being in the right place at the right time, and B, it has the flexibility to allow an organization to do a lot of the transformation stuff. And we’ll be careful about what we say about the value of RPA as a transformational tool, but [companies] find it easier to do the transformation stuff, using RPA or at least, make their operations more efficient.”

Poole: “What happened with RPA was the ability to have a step change, where you could get much bigger savings for the right processes, not every process, but the right processes, you could get really big step changes. And the other reason I think RPA has been so successful is that it’s what I call a generic tool, you can use it pretty much on any process that’s applicable for automation, you can use it in any industry and its application is really broad, you can use it, literally, anywhere that a human would be doing a manual process, you can apply RPA.”

“And I think that’s quite unique and even today as we look at the artificial intelligence space, there’s not any tool that is as broad as an RPA tool in terms of its application. And for me, that’s been the real power of RPA and the reason it accelerated so quickly, because everybody could find a use case for it, if they tried hard enough.”

Cox: The big change is around how work’s gonna happen going forward, so many of the boundaries have fallen down here. There used to be tight boundaries around where people work. Obviously those have gone. We’re all at home now, yeah? There were tight boundaries about when people worked. Guess what, now that we’ve all got assorted other jobs going on in our lives, like looking after our kids, those have changed, and actually the boundaries around what people do have been torn up as well. And I think that’s where there’s gonna be a lot of excitement and really rapid activity around RPA and intelligent automation around COVID, because as you’ve seen it, teams need to flex all around the world.

Poole: “And I think a lot of organizations have been kicked out of inactivity because there’s nothing like a good crisis to drive change and innovation. So I think we’re in that mode now where the pace of change is important and the scale of investment is also important, because people are not gonna be thinking about big ERP transformations or big system changes and so on.”

“So really what it comes down to is a lack of imagination as to what the art of the possible is. And I think often we find people just not ambitious enough and I think in the work that we’ve been doing with our clients around resilience and a lot of their discussions have been, “Oh, if only we’d gone a bit further. If only we’d actually completely automated the billing process rather than just part automating the billing process.”

Kirkwood: “We’re seeing an increase in the sales and in the organizations that are adopting [RPA]. What we’re tending to see is that there’s a bifurcation in the market, so those organizations that had already started and implemented RPA and internal automation or what got a called Hyper automation. They’ve seen the value of it, they’ve achieved really good returns on the investment.”

“Those organizations that haven’t started, to Dave’s point about being little more timid, those companies that are particularly more on AI rather than RPA – they haven’t achieved their returns in investment or they’re just starting, they’re putting everything on hold.”

Maguire: “Will the Microsoft purchase of Softomotive help drive RPA into smaller businesses?”

Cox: “Yeah, it’s definitely gonna drive into that. We’re gonna be talking bifurcations, there’s gonna be a really interesting bifurcation in the market around it. There’s those who are deploying RPA for transformational change and looking at the whole services and the whole end-to-end automation spectrum. I’m not sure if the Microsoft purchase is gonna have a big impact on that. But there’s the other RPA use case, ‘the robot for every person’ kind of use case, which is all about individual productivity and desktop productivity, a space Microsoft has owned for the last 20 years, and they’re gonna get real focused on that.”

Kirkwood: “First of all, I think that Microsoft moving into the market is a vindication that we’re moving in the right direction. The RPA market is still nascent, it’s still small, it’s really only taken off in the last four years, so for Microsoft to come in, make a purchase of Softomotive and then drive into that market is a great… A great, great vindication of the potential.”

Maguire: Have there been some serious hold outs among the IT crowd, and people have looked at RPA and said, “This is not proper technology?”

Kirkwood: “Oh, absolutely yeah, I went to a Gartner conference in March 2023. The audience was fairly evenly split between those who were woefully ignorant about RPA and those who were just actively hostile. Because, why on earth use this RPA nonsense when you can do it properly using APIs?”

“Being the number one trend for 2023, that’s a massive turn-around.”

Cox: “I think there has been a lot of confusion around how AI works with RPA, and in most cases the RPA vendors in the marketplace…they’re  trying to make sure that you can integrate with the big AI providers, whether that’s AWS or Microsoft or Google.”

“So, you think about most of the AI tools that we talk about are owned by AWS, Microsoft, or Google. And really, you can use any of those things interchangeably, they are microservices that you attach to any tool that you can integrate simply with very simple coding or virtually no coding. And, RPA tools just need to be able to, one way or another, connect with those capabilities.”

“So, basically… I think wherever in the world you need to connect different technologies and you need to manage case work between RPA robots doing things which are repeatable and programmable. Any AI tools which are using especially sort of cognitive capabilities from one vendor or another, frankly.”

Kirkwood: “To put it bluntly, I got famous a couple of years ago by saying that AI is nonsense, only nonsense wasn’t the word I used.” [laughter]

“So, we try to make it a little bit more understandable by using the word ‘understanding’. So, around RPA, you need the things that allow you to do, as David said, automate processes end-to-end, because RPA can’t do that on its own. Which is why Kit’s business has been so successful, it’s about orchestration.”

“But the key understandings are: visual understanding, so the system has to understand everything on the screen regardless of where it is on the screen, in exactly the same way that you or I do. There is document understanding, ’cause we’re still wading around in paper. So, it has to understand what that piece of paper is and what to do with it. The third one is process understanding, that’s really important.”

“And last is conversation understanding. Because increasingly everyone’s using voice rather than keyboards or chatbots.”

“And so, what Gartner calls hyper-automation, and what everyone else calls intelligent automation or hyper-intelligent automation, is the combination of those things.”

COX: “So, I see a couple of things. There’s definitely gonna be more consolidation. The really important parts of the future over the next 15, 18 months, are gonna be all about pace, and really focus on rapidity to live and speed to get live.”

“So I was talking to some economists the other week about… And they produced a report that said across Europe that SMEs, mid-market, big companies, most big companies got an AI strategy. But almost no small business has an AI strategy. And my response to that was, “Well, so my girlfriend’s in a gardening business.” Yeah? She doesn’t have an AI strategy. But she does have a tool that she gives to all of her customers that they can take a photo of a plant and it uses AI to tell them what the plant is. And that’s not having an AI strategy, but that simplification makes it consumable, yeah? And that’s, yeah. I think taking AI out of the hands of the geeks and into the hands of the community. That’s where RPA’s gonna be really useful.”

Maguire: The democratization of AI and RPA.

Poole: “I am convinced that we’re gonna see the uprising of the citizen developer, so the non-technical developer that knows nothing about technology but knows about process, knows what they want. And I’m not sure any of the RPA firms yet are thinking about that, but I think that that is gonna be absolutely paramount here.”

“It’s not ‘do they have an AI strategy?’ They don’t have any strategy. They really do not know where they’re headed. And unless you really know where you’re headed and you know what you want to achieve with technology in general, be it RPA or any other technology, you cannot be effective as an organization. And you cannot be investing wisely, because you don’t know where you’re headed. I mean, it makes sense that if you don’t know where you’re headed, you don’t know what you should be investing in.”

“So I think there’s a lack of strategy as a whole, and that’s what we’re really focused on and helping companies find their technology path through that. And at the end of the day, you’re not necessarily launching huge programs, but thinking about how do you use tools like RPA and orchestration tools to help build, in the next six months, “Can I build something or learn something, or do something that helps me learn about my future state that I’m looking for in my business?”

Kirkwood: “We had a plethora of ERP vendors, but it eventually boiled down to two, which was SAP and Oracle. I think exactly the same thing’s gonna happen within the RPA market over the next two to three years.”

“We thought that the TAM, the total addressable market, was large enough to support four or five RPA vendors, because as David said, there isn’t anywhere that RPA is not applicable if you do it properly. And [the IDC analyst] said, ‘No, it’s just two. There’s just you, UiPath, and Automation Anywhere, at the moment.’ I think Microsoft will come in. There’s a Magic Quadrant coming out, and I can’t tell you what the results are yet, but Microsoft is not a leader.”

“The other thing that I think is gonna happen, and I’ve been predicting this for quite awhile, is that ultimately, probably slightly longer term, all of the stuff that we’ve been describing today will disappear. It will just disappear, not because it’s not gonna get used, but because it’s gonna get used everywhere.”

“That democratization, that simplicity of use, the fact that it’s embedded, just the way AI is in the plant-identification tool that Kit mentioned. It will just become a natural part of our day-to-day lives at work and home. And I don’t think there’ll be any distinction. So when we talk about a robot for every person, at the moment there’s a robot for every employee. But actually there isn’t any reason why it couldn’t be a robot for every person, automating bits of your life at home, for family and so on, that you wanted to.”

“So I think that as organizations recognize that their world is radically changed, not only as a result of COVID but actually being accelerated by COVID, their digital transformation is critical. What you need, as David said, is a strategy to start with, and then RPA and intelligent automation is a tool or tools for that strategy. There is nothing spectacularly special or magical about RPA, in the same way that there is nothing special or magical about shared services and outsourcing 20 years ago. But it just makes things easier and more efficient.”

Maguire: Do you believe RPA is the best technology for companies to get started within the world of automation?

Cox: “So I’m gonna say there are two routes that we see have been really successful, and it entirely depends what your organization looks like at the point that you get going. The most important thing to do is just do something to get going, full stop, because otherwise you can just be paralyzed by, ‘What should I do? Where shall I start?’”

“It was, if you’re well-controlled, well-orchestrated across the human workforce, then you can go straight into RPA and other automation technologies ’cause you’ve got the clarity of what’s happening there. If you’re not well-controlled and managed across the human workforce then start with orchestration process mining, task mining to get that side under control ’cause then your automation will be much more successful and much more rapid.”

Maguire: Is RPA going to write its own RPA?

Kirkwood: “Well, it is gonna write its own RPA, but it will only follow the rules that are set for it by humans.”

“Ultimately using machine learning, using other technologies, it will work out what the optimum path is through that route automatically and then write the code for that that will become the robot. So in other words, the person won’t have to do anything but they’ll just do their normal work and the system will then say… The assistant will then say, “Okay, I can automate this process for you now.” That’s self-building robots.”

Static Methods In Javascript Classes?

This article discusses static methods in JavaScript classes, with suitable examples.

The “static” keyword is used on a method or a property of a class. A static method is a method that is a member of the class itself rather than of an instance of the class.

Using a static method, we can invoke a method directly from a class, instead of creating an instance of a class. A class can have any number of static methods and static variables.
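As a quick sketch of a static variable alongside a static method (the Counter class below is illustrative, not from the article):

```javascript
class Counter {
   // Static property: belongs to the class itself, not to instances
   static count = 0;

   // Static method: inside it, `this` refers to the class
   static increment() {
      this.count += 1;
      return this.count;
   }
}

console.log(Counter.increment()); // 1
console.log(Counter.increment()); // 2
console.log(Counter.count);       // 2
```

Note that Counter.count is read directly from the class; an instance created with new Counter() does not get its own copy of the static members.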

Let’s understand this concept better with the help of suitable examples further in this article.


The syntax for a static method is −

static methodName() {}

Example 1

This is an example program to illustrate the invoking of static and non-static methods.

class Sample {
   static show1() {
      return "static method is invoked";
   }
   show2() {
      return "non-static method is invoked";
   }
}
var obj = new Sample();
document.getElementById('static').innerHTML = Sample.show1() + '<br>' + obj.show2();

On executing the above code, the following output is generated −

static method is invoked
non-static method is invoked

Example 2

This is an example program to show what happens when more than one static method is defined with the same name and the same parameters: the later definition overrides the earlier one.

class Sample {
   static show1(a, b) {
      return 'Addition is performed : ' + (a + b);
   }
   static show1(a, b) {
      return 'Multiplication is performed : ' + (a * b);
   }
}
document.getElementById('static').innerHTML = Sample.show1(10, 20);

On executing the above code, the following output is generated, because the second definition of show1 overrides the first −

Multiplication is performed : 200

Example 3

This is an example program to invoke a static method inside a non-static method.

class Sample {
   static example1() {
      return "Referring to a static method.";
   }
   example2() {
      return Sample.example1();
   }
}
var sample1 = new Sample();
document.getElementById('static').innerHTML = sample1.example2();

On executing the above code, the following output is generated −

Referring to a static method.

Example 4

This is an example program on how more than one static method can be invoked, and how a static method can be called from within another static method using this.

class Sample {
   static display1() {
      return "static method 1 is invoked";
   }
   static display2() {
      return "static method 2 is invoked";
   }
   static display3() {
      return this.display2() + " is a function from another static function";
   }
}
document.getElementById('static').innerHTML = Sample.display1() + '<br>' + Sample.display3();

On executing the above code, the following output is generated −

static method 1 is invoked
static method 2 is invoked is a function from another static function

How To Use Automatic Frame Rate And Dynamic Range Switching On Apple Tv

tvOS 11.2 lets your Apple TV 4K automatically switch video display modes in order to match a video’s dynamic range and/or native frame rate. Owners of the fourth-generation model can use only frame rate matching, provided they’re on the tvOS 11.3 software or later.

By default, Apple TV 4K enforces video modes with the highest refresh rates. If your TV supports HDR10 at a 60Hz refresh rate and Dolby Vision at 30Hz, HDR10 will be picked over Dolby Vision even though the latter looks better.

You may have noticed that processing everything in 4K HDR yields poor results when watching non-4K content. Wouldn’t it be great if Apple TV could automatically switch video modes on your telly depending on content being watched?

With tvOS 11.2 and later, you can.

You can set your Apple TV 4K to automatically switch video display modes to match the native frame rate and dynamic range of content being watched rather than go with the highest capability of your TV set.

For instance, the device will use HDR10 at 60Hz when viewing HDR content and switch to Dolby Vision at 30Hz when viewing content that supports Dolby Vision.

Here’s how to use these content matching options in tvOS.

How to use content matching on Apple TV

By default, tvOS uses your selected display format to play content without alteration.

To set your Apple TV to switch display modes automatically, matching the video’s dynamic range and frame rate, do the following:

1) Open the Settings app on your Apple TV.

2) Select the section Video and Audio.

3) Select the sub-section Match Content.

4) Enable the following options:

Match Dynamic Range—Only available on Apple TV 4K, this setting forces the device to match its video display mode to the dynamic range of the video being played.

Match Frame Rate—Available on both the fourth-generation Apple TV and Apple TV 4K, this forces the device to match its display refresh rate to the content’s original frame rate. It applies to content mastered at different frame rates, for example 24FPS film-based content or other international content.

With the Match Frame Rate option enabled, Apple TV will match the frame rate of video content encoded at 60, 50, 30, 25, and 24 FPS.

Frame rates are matched to the refresh rates that are appropriate for your region (i.e. 29.97 FPS for NTSC). Videos encoded at 25FPS/30FPS are frame-doubled to display at 50Hz/60Hz which matches their original appearance while preserving a fluid user interface.

When Match Dynamic Range is on, Apple TV 4K automatically switches its display output to SDR when using an app that hasn’t been optimized for HDR yet.

AVKit-enabled apps already do this automatically. Older apps may need to be updated with support for the AVKit framework in order to enable the content matching features in tvOS.

Your Apple TV can match a video’s frame rate and dynamic range automatically

Begin by selecting one of the video modes listed underneath the Unverified Formats heading. This will run a short display test and verify that the selected mode displays correctly on your TV. Once verified, your Apple TV can switch to that video mode when needed.

Having trouble with the selected video mode?

If so, set your Apple TV to use the display mode that’s compatible with your TV set by choosing the Reset Video Settings option in Settings → Video and Audio.


Understanding Loss Function In Deep Learning

The loss function is very important in machine learning and deep learning. Let’s say you are working on a problem, have trained a machine learning model on the dataset, and are ready to put it in front of your client. But how can you be sure that this model will give the optimum result? Is there a metric or a technique that will help you quickly evaluate your model on the dataset? Yes, this is where loss functions come into play. In this article, we will explain everything about loss functions in deep learning.

This article was published as a part of the Data Science Blogathon.

What is Loss Function in Deep Learning?

In mathematical optimization and decision theory, a loss or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some “cost” associated with the event.

In simple terms, the loss function is a method of evaluating how well your algorithm models your dataset. It is a mathematical function of the parameters of the machine learning algorithm.

In simple linear regression, the prediction is calculated using the slope (m) and the intercept (b): ŷ = m·x + b. The loss for this is (y − ŷ)², i.e., the loss is a function of the slope and the intercept.
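A minimal sketch of this idea (function and variable names are illustrative): the squared loss of a one-feature linear model, written as a function of the slope m and intercept b.

```javascript
// Mean squared loss for simple linear regression: prediction = m*x + b
function squaredLoss(m, b, xs, ys) {
   let total = 0;
   for (let i = 0; i < xs.length; i++) {
      const yhat = m * xs[i] + b;   // model prediction
      total += (ys[i] - yhat) ** 2; // (y - yhat)^2
   }
   return total / xs.length;        // average over the dataset
}

// A perfect fit (y = 2x + 1) gives zero loss
console.log(squaredLoss(2, 1, [0, 1, 2], [1, 3, 5])); // 0
```

Changing m or b changes the loss, which is exactly what an optimizer exploits during training.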

Why Loss Function in Deep Learning is Important?

The famous author Peter Drucker said, “You can’t improve what you can’t measure.” That’s why the loss function comes into the picture: to evaluate how well your algorithm models your dataset.

If the value of the loss function is low, the model is good; otherwise, we have to change the parameters of the model to minimize the loss.

Cost Function vs Loss Function in Deep Learning

Most people confuse the loss function and the cost function. The two terms are often used interchangeably, but they are different; let’s understand how.

Loss Function:

Measures the error between predicted and actual values in a machine learning model.

Used to optimize the model during training.

Can be specific to individual samples.

Examples include mean squared error (MSE), mean absolute error (MAE), and binary cross-entropy.

Used to evaluate model performance.

Different loss functions can be used for different tasks or problem domains.

Cost Function:

Quantifies the overall cost or error of the model on the entire training set.

Used to guide the optimization process by minimizing the cost or error.

Aggregates the loss values over the entire training set.

Often the average or sum of individual loss values in the training set.

Used to determine the direction and magnitude of parameter updates during optimization.

Typically derived from the loss function, but can include additional regularization terms or other considerations.

Loss Function in Deep Learning

Loss functions in deep learning can be grouped by the task they are used for:

Regression

MSE (Mean Squared Error)

MAE (Mean Absolute Error)

Huber loss

Classification

Binary cross-entropy

Categorical cross-entropy

KL Divergence

GAN

Discriminator loss

Minmax GAN loss

Object detection

Focal loss

Word embeddings

Triplet loss

In this article, we will understand regression loss and classification loss.

A. Regression Loss

1. Mean Squared Error/Squared loss/L2 loss

The Mean Squared Error (MSE) is the simplest and most common loss function. To calculate the MSE, you take the difference between the actual value and the model prediction, square it, and average it across the whole dataset: MSE = (1/n) Σ (yi − ŷi)².


Advantages –

1. Easy to interpret.

2. Always differentiable because of the square.

3. Only one local minimum.

Disadvantages –

1. The error is in squared units, which makes it harder to interpret.

2. Not robust to outliers.

Note – In regression, use a linear activation function at the last neuron.
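As a sketch (function names are illustrative), MSE over arrays of actual and predicted values:

```javascript
// Mean Squared Error: average of squared differences
function mse(yTrue, yPred) {
   let sum = 0;
   for (let i = 0; i < yTrue.length; i++) {
      sum += (yTrue[i] - yPred[i]) ** 2; // squaring keeps the loss differentiable everywhere
   }
   return sum / yTrue.length;
}

console.log(mse([3, 5], [2, 7])); // (1 + 4) / 2 = 2.5
```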

2. Mean Absolute Error/ L1 loss

The Mean Absolute Error (MAE) is another simple loss function. To calculate the MAE, you take the absolute difference between the actual value and the model prediction and average it across the whole dataset: MAE = (1/n) Σ |yi − ŷi|.


Advantages –

1. Intuitive and easy to interpret.

2. Error unit is the same as the output column.

3. Robust to outliers.

Disadvantages –

1. The graph is not differentiable at zero, so we cannot use gradient descent directly; a subgradient calculation is needed.

Note – In regression, use a linear activation function at the last neuron.
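A minimal sketch of MAE (names illustrative), mirroring the MSE function above:

```javascript
// Mean Absolute Error: average of absolute differences
function mae(yTrue, yPred) {
   let sum = 0;
   for (let i = 0; i < yTrue.length; i++) {
      sum += Math.abs(yTrue[i] - yPred[i]); // the error stays in the same units as the output
   }
   return sum / yTrue.length;
}

console.log(mae([3, 5], [2, 7])); // (1 + 2) / 2 = 1.5
```

Because each error contributes linearly rather than quadratically, a single outlier inflates MAE far less than it inflates MSE.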

3. Huber Loss

In statistics, the Huber loss is a loss function used in robust regression that is less sensitive to outliers in data than the squared error loss. It is quadratic for small errors and linear for large ones:

L(y, ŷ) = ½(y − ŷ)² if |y − ŷ| ≤ δ, otherwise δ(|y − ŷ| − ½δ)

where:

n – the number of data points (when averaging the loss over a dataset).

y – the actual value of the data point, also known as the true value.

ŷ – the predicted value of the data point; this value is returned by the model.

δ – defines the point where the Huber loss function transitions from quadratic to linear.


Advantages –

1. Robust to outliers.

2. It lies between MAE and MSE.
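A sketch of the per-point Huber loss (names illustrative), showing the quadratic-to-linear transition at δ:

```javascript
// Huber loss for a single point: quadratic for small errors, linear for large ones
function huber(y, yhat, delta) {
   const err = Math.abs(y - yhat);
   if (err <= delta) {
      return 0.5 * err * err;          // MSE-like region near zero
   }
   return delta * (err - 0.5 * delta); // MAE-like region, less sensitive to outliers
}

console.log(huber(10, 9.5, 1)); // small error, quadratic: 0.125
console.log(huber(10, 0, 1));   // large error, linear: 9.5
```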

B. Classification Loss

1. Binary Cross Entropy/Log loss

It is used in binary classification problems, i.e., problems with two classes. For example: does a person have COVID or not, or will my article get popular or not.

Binary cross-entropy compares each of the predicted probabilities to the actual class output, which can be either 0 or 1. It then calculates a score that penalizes the probabilities based on their distance from the expected value, i.e., how close or far the prediction is from the actual value.

Loss = −(1/n) Σ (i = 1 to n) [yi · log(yihat) + (1 − yi) · log(1 − yihat)]

where:

yi – actual values

yihat – neural network prediction

Advantages –

1. The cost function is differentiable.

Disadvantages –

1. Multiple local minima.

2. Not intuitive.

Note – In binary classification, use a sigmoid activation function at the last neuron.
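A sketch of binary cross-entropy averaged over samples (names illustrative); yTrue holds labels in {0, 1} and yPred holds predicted probabilities in (0, 1):

```javascript
// Binary cross-entropy averaged over samples
function binaryCrossEntropy(yTrue, yPred) {
   let sum = 0;
   for (let i = 0; i < yTrue.length; i++) {
      sum += -(yTrue[i] * Math.log(yPred[i]) +
               (1 - yTrue[i]) * Math.log(1 - yPred[i]));
   }
   return sum / yTrue.length;
}

// Confident, correct predictions give a loss near zero
console.log(binaryCrossEntropy([1, 0], [0.99, 0.01]));
// Confident, wrong predictions are penalized heavily
console.log(binaryCrossEntropy([1, 0], [0.01, 0.99]));
```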

2. Categorical Cross Entropy

Categorical cross-entropy is used for multiclass classification and softmax regression.

loss function = −Σ (j = 1 to k) yj · log(ŷj)

cost function = −(1/n) Σ (i = 1 to n) Σ (j = 1 to k) yij · log(ŷij)

where:

k – the number of classes

y – actual value

ŷ – neural network prediction

Note – In multi-class classification, use the softmax activation function at the last neuron.

If the problem statement has 3 classes, the softmax activation is −

f(z1) = e^z1 / (e^z1 + e^z2 + e^z3)

When to use categorical cross-entropy and sparse categorical cross-entropy?

If the target column is one-hot encoded into classes like 0 0 1, 0 1 0, 1 0 0, then use categorical cross-entropy; if the target column has numerical class labels like 1, 2, 3, 4, …, n, then use sparse categorical cross-entropy.

Which is Faster?

Sparse categorical cross-entropy is faster than categorical cross-entropy.


In this article, we learned about different types of loss functions. The key takeaways from the article are:

We learned the importance of loss function in deep learning.

The difference between a loss function and a cost function.

The mean absolute error is robust to the outlier.

Binary cross-entropy is used for binary classification.

Sparse categorical cross-entropy is faster than categorical cross-entropy.

So, this was all about loss functions in deep learning. Hope you liked the article.

Frequently Asked Questions

Q1. What is a loss function?

A. A loss function is a mathematical function that quantifies the difference between predicted and actual values in a machine learning model. It measures the model’s performance and guides the optimization process by providing feedback on how well it fits the data.

Q2. What is loss and cost function in deep learning?

A. In deep learning, “loss function” and “cost function” are often used interchangeably. They both refer to the same concept of a function that calculates the error or discrepancy between predicted and actual values. The cost or loss function is minimized during the model’s training process to improve accuracy.

Q3. What is L1 loss function in deep learning?

A. L1 loss function, also known as the mean absolute error (MAE), is commonly used in deep learning. It calculates the absolute difference between predicted and actual values. L1 loss is robust to outliers but does not penalize larger errors as strongly as other loss functions like L2 loss.

Q4. What is loss function in deep learning for NLP?

A. In deep learning for natural language processing (NLP), various loss functions are used depending on the specific task. Common loss functions for tasks like sentiment analysis or text classification include categorical cross-entropy and binary cross-entropy, which measure the difference between predicted and true class labels for classification tasks in NLP.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

