## function | programmerah

Requirement: when a request receives an error response, render a page that displays the error information. Preparation: create the error-information pages 400.vue and 500.vue under content. Note: pay attention to the routing level at which the error pages are placed (mine are under `children`).

When the project uses EasyUI, the front end reports an error: `TypeError: a is undefined`. The reported error location is actually inside the jQuery source code, which seemed impossible. I searched a lot of information online but did not solve the problem. After nearly a month of thinking, I seem to have found the cause of this problem. It may not be right and only represents my personal view; if there is a definitive answer, please let me know. Thank you.

Note that this code is not placed inside any function, nor is it wrapped in `$(function () {})`; the statement sits directly in the JS code. These are all the cases that report this error in my project. My preliminary analysis is that, at the moment this statement runs, the method it calls does not exist yet, so the error is reported.

How can we measure the difference between a predicted result and the actual one? One idea: the closer the prediction, the smaller the difference; otherwise, the larger. This function provides exactly that. If the computed value is close to 1, the prediction is close to the actual result; otherwise it is far away. Therefore, this function can be used as the loss function of a logistic regression classifier: the closer each prediction is to its true value, the closer the loss is to 0. If the loss computed over all samples is close to 0, the predictions are very close to the actual results.
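The loss described above is the binary cross-entropy (log loss). A minimal sketch in plain Python (the function name `log_loss` and the sample numbers are my own choices for illustration):

```python
import math

def log_loss(y_true, y_pred):
    """Average cross-entropy (log) loss for binary labels."""
    eps = 1e-12  # clamp predictions to avoid log(0)
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Predictions close to the true labels give a loss near 0;
# predictions far from them give a much larger loss.
good = log_loss([1, 0, 1], [0.9, 0.1, 0.8])
bad = log_loss([1, 0, 1], [0.2, 0.9, 0.3])
```

Here `good` comes out well under `bad`, matching the claim that the closer the predictions, the closer the loss is to 0.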

This is a simple composite function, as shown in the figure above: c is a function of a and b, and e is a function of c and d. If we use the chain rule to differentiate with respect to a and b separately, we first find the derivative of e with respect to c and of c with respect to a and multiply them; then, for b, we find the derivatives of e with respect to c and d, find the derivatives of c and d with respect to b, multiply along each path, and add the results. The problem is that in this process the derivative of e with respect to c is computed twice. If the expression is particularly complex, the amount of computation becomes very large. How can we compute each derivative only once?

Next, continue to compute the value of each sub-unit and save its partial derivatives; multiplying all the partial derivatives along the path from the last sub-unit down to a variable gives the partial derivative of the function with respect to that variable. The essence of the calculation is top-down: at each step, save the value and multiply it into the next unit below, so the partial derivative along each edge only needs to be computed once, and all the partial derivatives are obtained in a single top-down pass.
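The top-down accumulation described above can be sketched on a concrete composite function. The figure's exact graph is not shown, so I assume the common example e = c · d with c = a + b and d = b + 1:

```python
# Reverse-mode sketch: e = c * d, where c = a + b and d = b + 1.
a, b = 2.0, 1.0

# Forward pass: compute each sub-unit's value.
c = a + b  # 3.0
d = b + 1  # 2.0
e = c * d  # 6.0

# Backward pass (top to bottom): compute de/dc once, save it,
# and multiply it down each path instead of recomputing it.
de_dc = d          # ∂e/∂c = d
de_dd = c          # ∂e/∂d = c
de_da = de_dc * 1.0                  # single path e→c→a, ∂c/∂a = 1
de_db = de_dc * 1.0 + de_dd * 1.0    # two paths e→c→b and e→d→b, summed
```

Note that `de_dc` is computed exactly once and reused on both paths that pass through c, which is the whole point of the top-down order.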

In fact, BP (the back-propagation algorithm) is computed in exactly this way. Take a three-layer neural network with an input layer, a hidden layer, and an output layer, and compute the partial derivatives of the loss function with respect to the weights: the loss is a complex composite function. If we first compute the partial derivatives with respect to the first-layer weights and then those with respect to the second-layer weights, we find many repeated calculation steps, just like the simple-function example above. So, to avoid this waste, we compute the partial derivatives from back to front: compute each unit's value, compute and save the partial derivative at each unit, and keep multiplying all the way back to the input layer.
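As a minimal sketch of this back-to-front computation, consider a network with one sigmoid neuron per layer (weights `w1`, `w2` and the squared-error loss are my own choices; the post does not fix a specific architecture). The saved quantity `delta2` is computed once and reused for the earlier layer, and a numerical check confirms the gradients:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def loss(w1, w2, x, y):
    a1 = sigmoid(w1 * x)   # hidden activation
    a2 = sigmoid(w2 * a1)  # output activation
    return 0.5 * (a2 - y) ** 2

def grads(w1, w2, x, y):
    # Forward pass, saving every unit's value.
    a1 = sigmoid(w1 * x)
    a2 = sigmoid(w2 * a1)
    # Backward pass: compute each local derivative once and
    # keep multiplying it toward the input layer.
    delta2 = (a2 - y) * a2 * (1 - a2)     # dL/dz2, saved
    dw2 = delta2 * a1                     # dL/dw2
    delta1 = delta2 * w2 * a1 * (1 - a1)  # dL/dz1, reuses delta2
    dw1 = delta1 * x                      # dL/dw1
    return dw1, dw2

# Sanity check against central finite differences.
w1, w2, x, y, h = 0.5, -0.3, 1.2, 1.0, 1e-6
dw1, dw2 = grads(w1, w2, x, y)
num_dw1 = (loss(w1 + h, w2, x, y) - loss(w1 - h, w2, x, y)) / (2 * h)
num_dw2 = (loss(w1, w2 + h, x, y) - loss(w1, w2 - h, x, y)) / (2 * h)
```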

In fact, it is the sum of the squares of all the weights. Generally, the term multiplied by the bias is not included. This term is very simple, so ignore it for the time being and leave it out of the formulas below (this is regularization).
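A small sketch of the penalty just described: sum the squares of every entry of every weight matrix, and keep the bias terms out of it (the function name and the sample matrices are my own):

```python
def l2_penalty(weight_matrices, lam):
    """lam times the sum of squares of all weights.

    Bias vectors are deliberately not passed in, matching the
    convention of leaving the bias out of the penalty.
    """
    total = 0.0
    for W in weight_matrices:       # one matrix per layer
        for row in W:
            for w in row:
                total += w * w
    return lam * total

# Two layers: a 1x2 matrix and a 1x1 matrix.
penalty = l2_penalty([[[1.0, 2.0]], [[3.0]]], 0.5)  # 0.5 * (1 + 4 + 9)
```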

So, remember what was said earlier: we differentiate from top to bottom and save the partial derivatives of the current sub-units. According to the formula above, the partial derivative with respect to the second weight matrix can be obtained directly from the saved derivatives.

From the formula above, we can see that the saved derivatives can be multiplied in directly. If the network has more layers, the subsequent steps are the same as this one, and so we get the formula.

Because this network has three layers, we have now obtained all the partial derivatives. For a deeper network the principle is the same: keep multiplying. From the second formula onward, the forms are all the same.

As x approaches negative infinity, y approaches 0; as x approaches positive infinity, y approaches 1; when x = 0, y = 0.5. Outside the range [-6, 6], the value of the function barely changes and is already very close to its limit, so that region is generally not considered in applications.
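These properties are easy to check numerically for the sigmoid function y = 1 / (1 + e^(-x)):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

mid = sigmoid(0)       # exactly 0.5
hi = sigmoid(6)        # already within about 0.0025 of 1
lo = sigmoid(-6)       # already within about 0.0025 of 0
far = sigmoid(40)      # indistinguishable from 1 in float64
```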

Although the sigmoid function has good properties and can be used in classification problems, such as the classifier of the logistic regression model, why choose this function? Besides being mathematically easy to handle, as above, it has a convenient derivative: sigma'(x) = sigma(x)(1 - sigma(x)). For classification problems, especially binary classification, the labels are assumed to follow a Bernoulli distribution, whose PMF is P(y; p) = p^y (1 - p)^(1 - y) for y in {0, 1}.
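Both facts can be verified in a few lines: the derivative identity is checked against a central finite difference, and the Bernoulli PMF simply plugs in y = 0 and y = 1:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Derivative identity: sigma'(x) = sigma(x) * (1 - sigma(x)).
x, h = 0.7, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
analytic = sigmoid(x) * (1 - sigmoid(x))

# Bernoulli PMF: P(y; p) = p**y * (1 - p)**(1 - y), y in {0, 1}.
def bernoulli_pmf(y, p):
    return p ** y * (1 - p) ** (1 - y)
```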

First, the cause of the problem: a variable is defined outside a function, then referenced and modified inside the function, and the result is the error `local variable 'a' referenced before assignment`. The reason: in Python, if a function assigns to a name that also exists as a global, that name becomes local to the function, so referencing it before the assignment raises exactly this "not defined" kind of error. Second, the solution: declare `a` as a global variable inside the function with the `global` keyword.
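The original post's code is not reproduced here, so the following is a minimal reconstruction assuming the variable is named `a`, showing both the failure and the `global` fix:

```python
a = 10  # module-level variable

def broken():
    try:
        print(a)   # fails: the assignment below makes `a` local,
                   # so this read happens before any local binding
        a = a + 1  # this assignment is what makes `a` local
    except UnboundLocalError:
        return "error"
    return "ok"

def fixed():
    global a   # refer to the module-level `a` instead of a local
    a = a + 1
    return a

err = broken()     # "error"; the global `a` is untouched
result = fixed()   # increments the global `a` to 11
```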

2. Parameters: first and last. `first` and `last` are bidirectional iterators, and the range that `reverse` inverts is [first, last): the element pointed to by `first` is included, but the element pointed to by `last` is not. 3. Return value. `reverse` returns no value. 4. Example:

1. Plug-in description: Plug-in 8 requires two parameters: one is the string to spell-check, and the other specifies how the text should be displayed after checking. They are: the `$text` string variable, which holds the text that requires spell checking; and the `$Action` string variable, a letter representing the format in which the text is displayed. dictionary.txt:

Syntax definitions: > `mysqli_connect_error()`: returns the error description of the last connection error. > `mysqli_connect_errno()`: returns the error code of the last connection error.

This error is usually caused by an unequal number of elements. For example, when a tuple is unpacked into a tuple of variables and the tuple does not have enough elements, as shown above. Another example: the dictionary `items()` method returns a traversable sequence of (key, value) tuples, so each item unpacks into exactly two variables.
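Both cases can be demonstrated in a few lines (the sample values are my own):

```python
# Too few values on the right-hand side raises ValueError at
# unpack time, e.g. "not enough values to unpack (expected 3, got 2)".
try:
    x, y, z = (1, 2)
except ValueError as exc:
    msg = str(exc)

# dict.items() yields (key, value) tuples, so exactly two
# loop targets match each item.
d = {"a": 1, "b": 2}
pairs = []
for key, value in d.items():
    pairs.append((key, value))
```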
