Logistic Regression Program Remover


The multinomial logistic regression model is a simple extension of the binomial logistic regression model, which you use when the outcome variable has more than two nominal (unordered) categories. In multinomial logistic regression, the nominal explanatory variables are dummy coded into multiple 1/0 variables.

Prerequisite: Linear Regression
This article discusses the basics of Logistic Regression and its implementation in Python. Logistic regression is basically a supervised classification algorithm. In a classification problem, the target variable (or output), y, can take only discrete values for a given set of features (or inputs), X.

Contrary to popular belief, logistic regression IS a regression model. It builds a regression model to predict the probability that a given data entry belongs to the category numbered "1". Just as linear regression assumes that the data follow a linear function, logistic regression models the data using the sigmoid function.

Logistic regression becomes a classification technique only when a decision threshold is brought into the picture. The setting of the threshold value is a very important aspect of Logistic regression and is dependent on the classification problem itself.
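To make this concrete, here is a minimal sketch (assuming NumPy is available) of how a sigmoid output is turned into a class label with a 0.5 threshold; the scores are made-up values used purely for illustration:

import numpy as np

def sigmoid(z):
    # logistic (sigmoid) function: maps any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical linear scores (beta^T x) for three observations
scores = np.array([-2.0, 0.3, 1.7])
probs = sigmoid(scores)                # predicted probabilities of class "1"
labels = (probs >= 0.5).astype(int)    # apply the decision threshold of 0.5
print(probs)   # approximately [0.12 0.57 0.85]
print(labels)  # [0 1 1]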

The choice of threshold value is mainly affected by the values of precision and recall. Ideally, we want both precision and recall to be 1, but this is seldom the case. In the case of a precision-recall tradeoff, we use the following arguments to decide on the threshold (a small sketch illustrating the tradeoff follows the two cases below):

1. Low Precision/High Recall: In applications where we want to reduce the number of false negatives without necessarily reducing the number of false positives, we choose a decision value with a low value of precision or a high value of recall. For example, in a cancer diagnosis application, we do not want any affected patient to be classified as not affected, even at the cost of some patients being wrongfully diagnosed with cancer. This is because the absence of cancer can be confirmed by further medical tests, whereas the disease cannot be detected in a candidate who has already been rejected.

2. High Precision/Low Recall: In applications where we want to reduce the number of false positives without necessarily reducing the number of false negatives, we choose a decision value with a high value of precision or a low value of recall. For example, if we are classifying whether customers will react positively or negatively to a personalised advertisement, we want to be absolutely sure that the customer will react positively to the advertisement, because otherwise a negative reaction can cause a loss of potential sales from the customer.
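The effect of moving the threshold can be seen directly by recomputing precision and recall at a few candidate values. The sketch below uses scikit-learn's precision_score and recall_score on hypothetical labels and predicted probabilities (both made up purely for illustration):

import numpy as np
from sklearn.metrics import precision_score, recall_score

# hypothetical ground-truth labels and predicted probabilities
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.55, 0.45, 0.9, 0.3])

for threshold in (0.3, 0.5, 0.7):
    y_pred = (y_prob >= threshold).astype(int)
    print('threshold =', threshold,
          'precision =', round(precision_score(y_true, y_pred), 2),
          'recall =', round(recall_score(y_true, y_pred), 2))

Lowering the threshold pushes recall up at the expense of precision, and raising it does the opposite, which is exactly the tradeoff described above.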

Based on the number of categories, Logistic regression can be classified as:

  1. binomial: the target variable can have only 2 possible types: "0" or "1", which may represent "win" vs "loss", "pass" vs "fail", "dead" vs "alive", etc.
  2. multinomial: the target variable can have 3 or more possible types which are not ordered (i.e., the types have no quantitative significance), like "disease A" vs "disease B" vs "disease C".
  3. ordinal: deals with target variables that have ordered categories. For example, a test score can be categorized as "very poor", "poor", "good", or "very good". Here, each category can be given a score like 0, 1, 2, 3.

First of all, we explore the simplest form of logistic regression, i.e., binomial logistic regression.


Binomial Logistic Regression

Consider an example dataset which maps the number of hours of study to the result of an exam. The result can take only two values, namely passed (1) or failed (0).

So, we have
$$y_i \in \{0, 1\}$$
i.e. y is a categorical target variable which can take only two possible types: "0" or "1".
In order to generalize our model, we assume that:

  • The dataset has 'p' feature variables and 'n' observations.
  • The feature matrix is represented as:
    $$X = \begin{bmatrix} 1 & x_{11} & \cdots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & \cdots & x_{np} \end{bmatrix}$$
    Here, $x_{ij}$ denotes the value of the $j^{th}$ feature for the $i^{th}$ observation.
    Here, we are keeping the convention of letting $x_{i0} = 1$. (Keep reading, you will understand the logic in a few moments.)
  • The $i^{th}$ observation, $x_i$, can be represented as:
    $$x_i = \begin{bmatrix} x_{i0} & x_{i1} & x_{i2} & \cdots & x_{ip} \end{bmatrix}^T$$
  • $h(x_i)$ represents the predicted response for the $i^{th}$ observation, i.e. $\hat{y}_i$. The formula we use for calculating $h(x_i)$ is called the hypothesis.

If you have gone through Linear Regression, you should recall that in linear regression, the hypothesis we used for prediction was:
$$h(x_i) = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip},$$
where $\beta_0, \beta_1, \ldots, \beta_p$ are the regression coefficients.
Let the regression coefficient matrix/vector $\beta$ be:
$$\beta = \begin{bmatrix} \beta_0 & \beta_1 & \beta_2 & \cdots & \beta_p \end{bmatrix}^T$$
Then, in a more compact form,
$$h(x_i) = \beta^T x_i.$$

The reason for taking $x_{i0} = 1$ is pretty clear now. We needed to do a matrix product, but there was no actual variable multiplied by $\beta_0$ in the original hypothesis formula, so we defined $x_{i0} = 1$.

Now, if we try to apply Linear Regression to the above problem, we are likely to get continuous values using the hypothesis we discussed above. Also, it does not make sense for $h(x_i)$ to take values larger than 1 or smaller than 0.
So, some modifications are made to the hypothesis for classification:
$$h(x_i) = g(\beta^T x_i), \qquad 0 \le h(x_i) \le 1,$$
where
$$g(z) = \frac{1}{1 + e^{-z}}$$
is called the logistic function or the sigmoid function.
Here is a plot showing g(z):
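The plot itself is not reproduced here, but a curve like the one described can be generated with a few lines of Matplotlib (a minimal sketch, assuming numpy and matplotlib are installed):

import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(-10, 10, 200)
g = 1.0 / (1.0 + np.exp(-z))   # sigmoid values

plt.plot(z, g)
plt.xlabel('z')
plt.ylabel('g(z)')
plt.title('Logistic (sigmoid) function')
plt.grid(True)
plt.show()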
We can infer from the above graph that:

  • g(z) tends towards 1 as $z \to +\infty$
  • g(z) tends towards 0 as $z \to -\infty$
  • g(z) is always bounded between 0 and 1

So, now, we can define the conditional probabilities for the two labels (0 and 1) for the $i^{th}$ observation as:
$$P(y_i = 1 \mid x_i; \beta) = h(x_i), \qquad P(y_i = 0 \mid x_i; \beta) = 1 - h(x_i)$$
We can write it more compactly as:
$$P(y_i \mid x_i; \beta) = h(x_i)^{\,y_i} \, \big(1 - h(x_i)\big)^{\,1 - y_i}$$
Now, we define another term, the likelihood of the parameters, as:
$$L(\beta) = \prod_{i=1}^{n} P(y_i \mid x_i; \beta) = \prod_{i=1}^{n} h(x_i)^{\,y_i} \, \big(1 - h(x_i)\big)^{\,1 - y_i}$$

Likelihood is nothing but the probability of the data (training examples) given a model and specific parameter values (here, $\beta$). It measures the support provided by the data for each possible value of $\beta$. We obtain it by multiplying all $P(y_i \mid x_i; \beta)$ for the given $\beta$.

And for easier calculations, we take the log likelihood:
$$l(\beta) = \log L(\beta) = \sum_{i=1}^{n} \Big[ y_i \log h(x_i) + (1 - y_i) \log\big(1 - h(x_i)\big) \Big]$$
The cost function for logistic regression is the negative of the log likelihood of the parameters. Hence, we can obtain an expression for the cost function J from the log likelihood equation as:
$$J(\beta) = -\sum_{i=1}^{n} \Big[ y_i \log h(x_i) + (1 - y_i) \log\big(1 - h(x_i)\big) \Big]$$
and our aim is to estimate $\beta$ so that the cost function is minimized!

Using the Gradient Descent algorithm

Firstly, we take partial derivatives of $J(\beta)$ w.r.t. each $\beta_j \in \beta$ to derive the gradient descent rule (we present only the final derived value here):
$$\frac{\partial J(\beta)}{\partial \beta_j} = \sum_{i=1}^{n} \big( h(x_i) - y_i \big)\, x_{ij} = \big( h(x) - y \big)^T x_j$$
Here, y and h(x) represent the response vector and the predicted response vector (respectively). Also, $x_j$ is the vector representing the observation values for the $j^{th}$ feature.
Now, in order to get $\min_{\beta} J(\beta)$, we repeatedly update each coefficient:
$$\beta_j := \beta_j - \alpha \, \frac{\partial J(\beta)}{\partial \beta_j},$$
where $\alpha$ is called the learning rate and needs to be set explicitly.
Let us see the Python implementation of the above technique on a sample dataset (download it from here):

import csv
import numpy as np
import matplotlib.pyplot as plt

def loadCSV(filename):
    # function to load dataset
    with open(filename, 'r') as csvfile:
        lines = csv.reader(csvfile)
        dataset = list(lines)
    for i in range(len(dataset)):
        dataset[i] = [float(x) for x in dataset[i]]
    return np.array(dataset)

def normalize(X):
    # function to normalize feature matrix, X
    mins = np.min(X, axis=0)
    maxs = np.max(X, axis=0)
    rng = maxs - mins
    norm_X = 1 - ((maxs - X) / rng)
    return norm_X

def logistic_func(beta, X):
    # logistic (sigmoid) function
    return 1.0 / (1 + np.exp(-np.dot(X, beta.T)))

def log_gradient(beta, X, y):
    # logistic gradient function
    first_calc = logistic_func(beta, X) - y.reshape(X.shape[0], -1)
    final_calc = np.dot(first_calc.T, X)
    return final_calc

def cost_func(beta, X, y):
    # cost function, J
    log_func_v = logistic_func(beta, X)
    y = np.squeeze(y)
    step1 = y * np.log(log_func_v)
    step2 = (1 - y) * np.log(1 - log_func_v)
    final = -step1 - step2
    return np.mean(final)

def grad_desc(X, y, beta, lr=.01, converge_change=.001):
    # gradient descent function
    cost = cost_func(beta, X, y)
    change_cost = 1
    num_iter = 1
    while (change_cost > converge_change):
        old_cost = cost
        beta = beta - (lr * log_gradient(beta, X, y))
        cost = cost_func(beta, X, y)
        change_cost = old_cost - cost
        num_iter += 1
    return beta, num_iter

def pred_values(beta, X):
    # function to predict labels using a 0.5 threshold
    pred_prob = logistic_func(beta, X)
    pred_value = np.where(pred_prob >= .5, 1, 0)
    return np.squeeze(pred_value)

def plot_reg(X, y, beta):
    # function to plot the labelled observations and the decision boundary
    x_0 = X[np.where(y == 0.0)]
    x_1 = X[np.where(y == 1.0)]
    plt.scatter([x_0[:, 1]], [x_0[:, 2]], c='b', label='y = 0')
    plt.scatter([x_1[:, 1]], [x_1[:, 2]], c='r', label='y = 1')
    # plotting decision boundary
    x1 = np.arange(0, 1, 0.1)
    x2 = -(beta[0, 0] + beta[0, 1] * x1) / beta[0, 2]
    plt.plot(x1, x2, c='k', label='reg line')
    plt.xlabel('x1')
    plt.ylabel('x2')
    plt.legend()
    plt.show()

if __name__ == '__main__':
    # load the dataset
    dataset = loadCSV('dataset1.csv')
    # normalizing feature matrix
    X = normalize(dataset[:, :-1])
    # stacking columns with all ones in feature matrix
    X = np.hstack((np.matrix(np.ones(X.shape[0])).T, X))
    # response vector
    y = dataset[:, -1]
    # initial beta values
    beta = np.matrix(np.zeros(X.shape[1]))
    # beta values after running gradient descent
    beta, num_iter = grad_desc(X, y, beta)
    print('Estimated regression coefficients:', beta)
    print('No. of iterations:', num_iter)
    # predicted labels
    y_pred = pred_values(beta, X)
    # number of correctly predicted labels
    print('Correctly predicted labels:', np.sum(y == y_pred))
    # plotting regression line
    plot_reg(X, y, beta)


Note: Gradient descent is only one of the many ways to estimate $\beta$. There are more advanced optimization algorithms that can be easily run in Python once you have defined your cost function and your gradients (a minimal sketch using scipy.optimize appears after the list of pros and cons below). These algorithms include:

  • BFGS (Broyden–Fletcher–Goldfarb–Shanno algorithm)
  • L-BFGS (like BFGS but uses limited memory)
  • Conjugate Gradient

Advantages/disadvantages of using any one of these algorithms over Gradient descent:

  • Advantages
    • Don’t need to pick learning rate
    • Often run faster (not always the case)
    • Can numerically approximate gradient for you (doesn’t always work out well)
  • Disadvantages
    • More complex
    • More of a black box unless you learn the specifics
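As promised above, here is a minimal sketch of handing a logistic cost function and its gradient to scipy.optimize.minimize with the L-BFGS-B method (BFGS and CG work the same way). The helper names (cost, gradient) and the synthetic data are illustrative assumptions, not part of the original article:

import numpy as np
from scipy.optimize import minimize

def cost(beta, X, y):
    # negative log-likelihood, i.e. the cost function J(beta) discussed above
    h = 1.0 / (1.0 + np.exp(-X.dot(beta)))
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def gradient(beta, X, y):
    # gradient of the cost with respect to beta
    h = 1.0 / (1.0 + np.exp(-X.dot(beta)))
    return X.T.dot(h - y) / y.size

# tiny synthetic problem, for illustration only
rng = np.random.default_rng(0)
X = np.hstack([np.ones((100, 1)), rng.normal(size=(100, 2))])
y = (X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=100) > 0).astype(float)

result = minimize(cost, x0=np.zeros(X.shape[1]), args=(X, y),
                  jac=gradient, method='L-BFGS-B')
print('Estimated coefficients:', result.x)

Note that no learning rate is needed here; the optimizer chooses its own step sizes, which is exactly the advantage listed above.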

Multinomial Logistic Regression

In Multinomial Logistic Regression, the output variable can have more than two possible discrete outputs. Consider the Digit Dataset. Here, the output variable is the digit value, which can take values out of (0, 1, 2, 3, 4, 5, 6, 7, 8, 9).
Given below is the implementation of Multinomial Logistic Regression using scikit-learn to make predictions on the digits dataset.

from sklearn import datasets, linear_model, metrics
from sklearn.model_selection import train_test_split

# load the digits dataset
digits = datasets.load_digits()

# defining feature matrix (X) and response vector (y)
X = digits.data
y = digits.target

# splitting X and y into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4,
                                                    random_state=1)

# create the logistic regression object and train it on the training sets
reg = linear_model.LogisticRegression()
reg.fit(X_train, y_train)

# making predictions on the testing set
y_pred = reg.predict(X_test)

# comparing actual response values (y_test) with predicted response values (y_pred)
print('Logistic Regression model accuracy (in %):',
      metrics.accuracy_score(y_test, y_pred) * 100)

Finally, here are some points about logistic regression to ponder upon:

  • It does NOT assume a linear relationship between the dependent variable and the independent variables, but it does assume a linear relationship between the explanatory variables and the logit of the response.
  • Independent variables can even be power terms or some other nonlinear transformations of the original independent variables (see the sketch after this list).
  • The dependent variable does NOT need to be normally distributed, but it typically assumes a distribution from an exponential family (e.g. binomial, Poisson, multinomial, normal, …); binary logistic regression assumes a binomial distribution of the response.
  • The homogeneity of variance does NOT need to be satisfied.
  • Errors need to be independent but NOT normally distributed.
  • It uses maximum likelihood estimation (MLE) rather than ordinary least squares (OLS) to estimate the parameters, and thus relies on large-sample approximations.
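As an illustration of the point about power terms, the following sketch fits a logistic regression on squared and interaction terms of the original features using scikit-learn; the synthetic dataset and the chosen parameters are purely illustrative assumptions:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# synthetic data, for illustration only
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# logistic regression on degree-2 (power and interaction) terms of the features
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LogisticRegression(max_iter=1000))
model.fit(X, y)
print('Training accuracy:', model.score(X, y))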


This article is contributed by Nikhil Kumar. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.

Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.


Excluding variables from a logistic regression in R

I would like to perform a logistic regression on all variables but two in a large data frame. How can I ask R to refer to all variables except those two without creating a new data frame? For example:

I have this line of code:

and I want to take out 'female' and 'apcalc'. Can I do it in this single line of code?



2 Answers

You could modify the model statement to just include the variables you want. I think all three lines below return the same estimates:

Mark Miller

EDIT

If you want to remove those columns for the analysis, then either subset the data before running the model, or do it inside the glm call. Keep in mind the latter will slow the call to glm for larger data sets.

ORIGINAL ANSWER

If you want to extract only the coefficients for female and apcalc, then

Rich Scriven
