
Connect Firebase with Flutter for Android and iOS Apps


Overview


Firebase is a simple yet powerful backend for Flutter apps. Firebase is a Google product that comes with countless features, such as authentication, Cloud Firestore, and the Realtime Database. The Dart language and the Flutter framework also come from Google, so Flutter has official Firebase support through the FlutterFire set of libraries.

The initial setup of Firebase integration in a Flutter app is simple and seamless. In this tutorial we will learn how to connect a Flutter app to a Firebase backend.

So let’s begin…

To begin, we need to create a Flutter app. Simply enter the following command in the VS Code terminal (note that Flutter project names must be lowercase with underscores, e.g. flutter_app):


# flutter create <app_name>

flutter create flutter_app

Now it is time to configure Firebase and connect the app to the Firebase backend.


Step 1: Register the app


Go to the Firebase console:

https://firebase.google.com/


In the Firebase dashboard, select Create new project and give your project a name.

Add project name

Firebase asks about analytics. For this project we will not enable Google Analytics. Now select Create Project.


Disable Google Analytics and continue

Firebase will continue the process in the background and make the project ready for us.


Firebase runs in the background to make our project ready

Click Continue and then go back to the Firebase dashboard. Select a platform, either iOS or Android. In this case, we will start with the Android app and add iOS later.


Firebase dashboard

Now go to your project in VS Code. Navigate to android/app/build.gradle. You can see an applicationId similar to com.example.firebaselogin.
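As a rough sketch (the surrounding lines are omitted and the package name is the example value from above, so yours will differ), the relevant block in android/app/build.gradle looks like this:

defaultConfig {
    // this value must match the Android package name you register in Firebase
    applicationId "com.example.firebaselogin"
    // ... other generated settings stay as they are
}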

You can also find the package name in the AndroidManifest.xml file in the android folder.

The Android package name and the applicationId must be the same, so enter exactly that name when registering the app; this is very important. We will leave the app nickname blank for simplicity, then click Register app.


Register app

Step 2: Download the config file


Now download the google-services.json config file and store it in the Flutter app. The location is important, as the file contains API keys and other information Firebase needs. Location: the <flutter app>/android/app folder.
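The downloaded file should end up next to the app-level build.gradle, roughly like this (an illustrative sketch, only the relevant files are shown):

<flutter app>/
  android/
    app/
      build.gradle
      google-services.json   <- place the downloaded file here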


Download json file

Now it is time to add the Firebase SDK. Modify the build.gradle file to add the Google services classpath. Remember to open the project-level build.gradle (<project>/build.gradle).
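As a rough sketch (the exact plugin version comes from the Firebase console and may differ), the project-level build.gradle gains a Google services classpath entry like this:

buildscript {
    repositories {
        google()   // Google's Maven repository
    }
    dependencies {
        // Google services Gradle plugin; use the version shown by the console
        classpath 'com.google.gms:google-services:4.3.15'
    }
}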


Add dependencies

Step 3: Modify the Gradle file


Add all plugins and click Next
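A hedged sketch of the corresponding app-level change in <project>/app/build.gradle (older apply-plugin syntax shown; newer templates use a plugins { } block, and for a Flutter app the Firebase packages themselves come from pubspec.yaml):

apply plugin: 'com.android.application'
// Google services plugin; processes the google-services.json added in Step 2
apply plugin: 'com.google.gms.google-services'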

Now we have successfully added the Google services plugin to our project. We need to uninstall the app and build it again to run it on an emulator or a physical device.


Go to console

OK! Now it's time to add the iOS app.


iOS follows similar steps. Open the iOS project in Xcode at ios/Runner.xcodeproj and copy the Bundle Identifier under General:


Runner file

Select the iOS platform from the Firebase dashboard. You will see a similar screen where we add an iOS bundle ID. By default the Android and iOS bundle names will be the same, and it is better to keep the same name for consistency. Leave all the optional fields blank and click Register app to move to the next step.


Register iOS app

Download GoogleService-Info.plist and drag the file into the root of your Xcode project, within Runner. Again, remember that the location is important.


Download plist file
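After dragging, the file should sit inside the Runner folder, roughly like this (an illustrative sketch, only the relevant files are shown):

<flutter app>/
  ios/
    Runner/
      Info.plist
      GoogleService-Info.plist   <- added by dragging it in via Xcode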

It's important to use Xcode to add the GoogleService-Info.plist file, because dragging it in through Xcode registers it with the project; copying it in any other way will not work.


Use Xcode to move plist file

Click Next

Click Next

Click Continue to console

Conclusion


We've learned how to hook up a Flutter application to a Firebase backend. We created both Android and iOS apps in the Firebase console and then connected them to our application by downloading the google-services.json and GoogleService-Info.plist config files.

Learn Everything About Feature Scaling


What is Feature Scaling?


Feature scaling is a technique used when we build a machine learning model. It normalizes the range of the independent variables, or features, of a dataset, and is also known as data normalization. It is important to normalize the data during the preprocessing phase, because many machine learning algorithms will not perform well if the data attributes have very different scales.

Let's scratch the surface…

Why is Feature Scaling Important?


The importance of feature scaling can be illustrated by the following simple example.

Suppose a dataset has the following features, where each feature has its own magnitude and unit.


Features     f1     f2     f3     f4     f5
Magnitude    300    400    15     20     550
Unit         Kg     Kg     cm     cm     g

Remember every feature has two components


  • Magnitude  (Example: 300)
  • Unit (Example: Kg)

Always keep in mind: many ML algorithms, such as K-Nearest Neighbors, work with distance measures like Euclidean distance or Manhattan distance, and a few others.


Features     f1     f2     f3     f4     f5     (f2 - f1)          (f4 - f3)
Magnitude    300    400    15     20     550    400 - 300 = 100    20 - 15 = 5
Unit         Kg     Kg     cm     cm     g      Kg                 cm

Coming back to this example: when we look at the differences between features, the gaps vary widely. Some attributes have a large gap between them while others are very close to each other, as the table above shows.

You may also have noticed that the unit of f5 is grams (g) while f1 and f2 are in kilograms (Kg). In this case the model may treat the value of f5 as greater than f1 and f2, but that is not really the case. For these reasons, the model may give wrong predictions.
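As a rough illustration (the two sample points below are made up, they are not rows from the table): suppose one record is (300 Kg, 15 cm) and another is (400 Kg, 20 cm). The Euclidean distance between them is dominated almost entirely by the Kg feature:

    \[ d = \sqrt{(400-300)^2 + (20-15)^2} = \sqrt{10000 + 25} \approx 100.12 \]

The 5 cm difference barely moves the result, so an unscaled distance-based model effectively ignores the cm feature.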

Therefore we need to bring all the attributes (f1, f2, f3, …) to the same scale regardless of their units. In short, we need to convert all the data into the same range (usually 0 to 1) so that no particular feature dominates another and none is unduly suppressed. (By doing so, convergence also becomes much faster and more efficient.)

There are two common methods used to bring all attributes to the same scale.


Min-max Scaling


In min-max scaling, values are rescaled to a range between 0 and 1. To find the new value, we subtract the minimum value and then divide by the maximum minus the minimum. Scikit-Learn provides MinMaxScaler for this calculation.



    \[ X_{new} = \frac{X_i - \min(X)}{\max(X) - \min(X)} \]
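For example, using the Age column of the sample dataset shown later in this post (minimum 27, maximum 78), the first record (Age 44) becomes:

    \[ X_{new} = \frac{44 - 27}{78 - 27} = \frac{17}{51} \approx 0.3333 \]

which matches the first value in the min-max output further below.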

Standardization

Standardization is much less affected by outliers. First we subtract the mean value, then divide by the standard deviation, so that the resulting distribution has zero mean and unit variance. Scikit-Learn provides a transformer called StandardScaler for this calculation.



    \[ X_{new} = \frac{X_i - \mu}{\sigma} \]

where \( \mu \) is the mean and \( \sigma \) is the standard deviation of the feature.
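For example, the Age column of the sample dataset shown later has a mean of 42.7 and a (population) standard deviation of about 13.63, so the first record (Age 44) becomes:

    \[ X_{new} = \frac{44 - 42.7}{13.63} \approx 0.095 \]

which matches the first value in the standardization output further below.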


Here I show an example of feature scaling using min-max scaling and standardization. I'm using Google Colab, but you can use any notebook/IDE such as Jupyter Notebook or PyCharm.


Go to the link and download Data_for_Feature_Scaling.csv


Upload the CSV to your Google Drive.

Mount the drive in the working notebook.

For that you may need an authorization code from Google. Then run the code.


# feature scaling sample code
# import recommended libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import preprocessing

# mount Google Drive so the notebook can read files stored there
from google.colab import drive
drive.mount('/content/drive')

# import the dataset
# adjust this path to wherever the CSV sits, e.g. under '/content/drive/My Drive/'
data_set = pd.read_csv('feature_scaling/Data_for_Feature_Scaling.csv')

# check the first few rows of the data
data_set.head()

Output
        Country	 Age	Salary	Purchased
0	France	 44	72000	0
1	Spain	 27	48000	1
2	Germany	 30	23000	0
3	Spain	 38	51000	0
4	Germany	 40	1000	1

# select the Age and Salary columns (columns 1 and 2)
x = data_set.iloc[:, 1:3].values
print('Original data values: \n', x)

Output
Original data values: 
 [[  44   72000]
 [   27   48000]
 [   30   23000]
 [   38   51000]
 [   40    1000]
 [   35   49000]
 [   78   23000]
 [   48   89400]
 [   50   78000]
 [   37   9000]]

from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
# Scaled feature 
x_after_min_max_scaler = min_max_scaler.fit_transform(x)
print('\n After min max scaling\n', x_after_min_max_scaler)

Output
After min max scaling
 [[0.33333333  0.80316742]
 [0.           0.53167421]
 [0.05882353   0.24886878]
 [0.21568627   0.56561086]
 [0.25490196   0.        ]
 [0.15686275   0.54298643]
 [1.           0.24886878]
 [0.41176471   1.        ]
 [0.45098039   0.87104072]
 [0.19607843   0.09049774]]

# Now use Standardisation method
Standardisation = preprocessing.StandardScaler()
x_after_Standardisation = Standardisation.fit_transform(x)
print('\n After Standardisation: \n', x_after_Standardisation)

Output
After Standardisation: 
 [[ 0.09536935  0.97512896]
 [-1.15176827   0.12903008]
 [-0.93168516  -0.75232292]
 [-0.34479687   0.23479244]
 [-0.1980748   -1.52791356]
 [-0.56487998   0.1642842 ]
 [ 2.58964459  -0.75232292]
 [ 0.38881349   1.58855065]
 [ 0.53553557   1.18665368]
 [-0.41815791  -1.2458806 ]]


Import CSV file to Firestore using GCP

Overview

If you have a large number of records, entering each record into a Cloud Firestore database manually is time consuming, not to mention tedious. In this tutorial you will learn an easy method to import a large CSV file into Google Cloud Firestore using GCP and Node.js. You will need a Google account and a valid CSV file. Before that, let's see what GCP and Cloud Firestore are all about.

let’s start…

GCP

Google provides cloud computing services to clients across the world through Google Cloud Platform (GCP). It offers a series of modular services such as hosting, data storage, data analytics, Machine Learning (ML), and many more, all of which run on Google's hardware infrastructure.

Cloud Firestore

If you are looking for a fast NoSQL document database, Cloud Firestore from Google is hard to beat. It is serverless and simplifies storing, syncing, and querying data for developers. It also offers real-time synchronization and offline support. Built-in security rules mean developers spend less time securing the application, so the time required to develop a mobile application is significantly reduced.

GCP setup

We will create a new project in Google Cloud Platform (GCP), in this case named concilsVote (you can give it any name you wish). This is how the main menu looks. Click Firestore and then Data.

Then create a new project by clicking NEW PROJECT or CREATE PROJECT as shown. If you have an existing project, you can search for it and use it instead.

Fill in the form and click CREATE. (You need to enter a suitable project name and location.)

Now, select the project you just created from the drop-down and then click OPEN.

You will see the following screen. Choose SELECT NATIVE MODE from the options given.

Then choose the nearest location to store the data and click CREATE DATABASE. Please select the location carefully, because it cannot be changed once the database has been created.

You will see the database creation process running in the background.

Now you will see a screen that says 'Your database is ready to go. Just add data.' It is time to activate Cloud Shell.

Enter the following command to check whether the project is configured:

gcloud config list project

Now set the project ID using the command below. You can find the project ID in your project list.

gcloud config set project PROJECT_ID
gcloud config set project concilsVote

Now write some logic to read CSV data and import it to Firestore.

Create a new directory named concilsVote in the terminal and then change into that directory.

mkdir concilsVote
cd concilsVote

Initialize npm using the command below in the terminal and fill in the details.

npm init

Press Enter to accept the defaults for all the options, and answer yes at the end, until a package.json file is created with details like the ones below.

{
  "name": "rainfallcsvexport",
  "version": "1.0.0",
  "description": "Rainfall Data conversion",
  "main": "index.js",
  "scripts": { "test": "echo \"Error: no test specified\" && exit 1" },
  "author": "Your name",
  "license": "ISC"
}

Now you need to install the following dependencies. Just run the following npm commands.

npm install @google-cloud/firestore
npm install csv-parse

Create a new file named concilVote.js in the terminal (any name is fine). Copy the code below and paste it into the file. Note that the code writes to a collection named rainfall and uses a SUBDIVISION column from the CSV as the document ID, so adjust those to match your own data.

const {readFile}  = require('fs').promises;
const {promisify} = require('util');
// csv-parse v4 exposes a callback-style parse() at the package root; wrap it in a promise
const parse       = promisify(require('csv-parse'));
const {Firestore} = require('@google-cloud/firestore');

// Expect the CSV file path as the first command-line argument
if (process.argv.length < 3) {
  console.error('Please include a path to a csv file');
  process.exit(1);
}

const db = new Firestore();

// Write records in batches; Firestore allows at most 500 writes per batch
function writeToFirestore(records) {
  const batchCommits = [];
  let batch = db.batch();
  records.forEach((record, i) => {
    // Use the SUBDIVISION column as the document ID in the 'rainfall' collection
    const docRef = db.collection('rainfall').doc(record.SUBDIVISION);
    batch.set(docRef, record);
    if ((i + 1) % 500 === 0) {
      console.log(`Writing record ${i + 1}`);
      batchCommits.push(batch.commit());
      batch = db.batch();
    }
  });
  // Commit whatever is left in the final, partially filled batch
  batchCommits.push(batch.commit());
  return Promise.all(batchCommits);
}

// Read the CSV, parse it into objects keyed by the header row, and import it
async function importCsv(csvFileName) {
  const fileContents = await readFile(csvFileName, 'utf8');
  const records = await parse(fileContents, { columns: true });
  try {
    await writeToFirestore(records);
  }
  catch (e) {
    console.error(e);
    process.exit(1);
  }
  console.log(`Wrote ${records.length} records`);
}

importCsv(process.argv[2]).catch(e => console.error(e));

Now upload the CSV file to the concilVote folder by right-clicking the folder name.

To import the data into Firestore, simply run the following command.

node concilVote.js concilVote.csv

Press Enter, and you will see the data being imported. If everything goes well, you will see a message like the one below in the console. Now just refresh the Firestore console page and you will see the data.

Wrote 5302 records

Congratulations!