Logistic Regression with Spark: Learn Data Science

Logistic regression with Spark is achieved using MLlib. Logistic regression returns binary class labels, that is, "0" or "1". In this example, we consider a data set with a single feature, "study hours", and a class label indicating whether the student passed (1) or did not pass (0).

```python
from pyspark import SparkContext
import numpy as np
from numpy import array
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import LogisticRegressionWithLBFGS

sc = SparkContext()

def createLabeledPoints(label, points):
    # MLlib expects each training sample as LabeledPoint(label, feature_vector)
    return LabeledPoint(label, points)

# Each row is [label, [study hours]]; 0 = did not pass, 1 = passed
studyHours = [
    [0, [0.5]],
    [0, [0.75]],
    [0, [1.0]],
    [0, [1.25]],
    [0, [1.5]],
    # ...
```
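To make the example runnable end to end, here is a minimal self-contained sketch of how such a workflow is typically completed with LogisticRegressionWithLBFGS. The study-hours values and labels beyond the first five rows, the iteration count, and the 2.5-hour prediction are illustrative assumptions, not the post's original code or data.

```python
from pyspark import SparkContext
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import LogisticRegressionWithLBFGS

sc = SparkContext()

def createLabeledPoints(label, points):
    # MLlib expects each training sample as LabeledPoint(label, feature_vector)
    return LabeledPoint(label, points)

# Each row is [label, [study hours]]; 0 = did not pass, 1 = passed.
# Only the first five rows come from the excerpt above; the remaining
# values are hypothetical placeholders so the example can run end to end.
studyHours = [
    [0, [0.5]], [0, [0.75]], [0, [1.0]], [0, [1.25]], [0, [1.5]],
    [0, [1.75]], [1, [2.0]], [0, [2.25]], [1, [2.5]], [0, [2.75]],
    [1, [3.0]], [0, [3.25]], [1, [3.5]], [1, [4.0]], [1, [4.25]],
    [1, [4.5]], [1, [4.75]], [1, [5.0]], [1, [5.5]],
]

# Build an RDD of LabeledPoint objects from the raw Python list
trainingData = sc.parallelize(studyHours) \
                 .map(lambda row: createLabeledPoints(row[0], row[1]))

# Train a binary logistic regression model with the L-BFGS optimizer
model = LogisticRegressionWithLBFGS.train(trainingData, iterations=100)

# Predict the class label for a student who studied 2.5 hours (returns 0 or 1)
print(model.predict([2.5]))

sc.stop()
```

LogisticRegressionWithLBFGS is generally preferred over the SGD-based variant because L-BFGS typically converges in fewer iterations and does not require tuning a step size.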

About the author

Devji Chhanga

I have been teaching computer science at the University of Kutch, the westernmost district of India, since 2011. At iDevji, I share tech stories that excite me. You will love reading the blog if you, too, believe in the disruptive power of technology. Some stories are purely technical, while others take an empathetic approach to problem solving using technology.


