
Design by Contract is a software design approach in which software is developed against contracts, so that each component does no more and no less than it claims to do.

What is a contract? A contract is a document that defines the rights and responsibilities of the parties involved in an agreement, and lists the repercussions if either party fails to abide by it. We have all seen one, whether a rental agreement with a landlord or an employment contract that spells out the roles and responsibilities you must fulfill.

We follow a similar process while developing software, focusing on documenting (and agreeing to) the rights and responsibilities of software modules to ensure program correctness. While writing the contract, these questions help get things clear:

  • What does the contract expect?
  • What does the contract guarantee?
  • What does the contract maintain?

Software that follows Design by Contract has these conditions specified; a small code sketch follows the list.

  • Preconditions: conditions that must be true before the routine is called; if they are violated, the routine should never be called.
  • Postconditions: the state the routine guarantees once it has finished.
  • Class invariants: conditions the class guarantees are always true from the caller's perspective; they may be temporarily broken during a routine's internal processing.
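
As a rough illustration (a minimal Python sketch; the account_withdraw routine and its checks are hypothetical, not from any library), plain assertions can stand in for the contract:

def account_withdraw(balance, amount):
    # Precondition: the caller must request a positive amount no larger than the balance.
    assert amount > 0, "precondition violated: amount must be positive"
    assert amount <= balance, "precondition violated: amount exceeds balance"

    new_balance = balance - amount

    # Postcondition: the routine guarantees the balance decreased by exactly `amount`.
    assert new_balance == balance - amount, "postcondition violated"
    return new_balance

If a precondition fails the fault lies with the caller; if a postcondition fails the fault lies with the routine itself, which matches the crash-early idea discussed below.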

Why can Design by Contract be a good approach?

  • DBC doesn't require any setup or mocking.
  • With DBC we can define both the success and failure cases.
  • DBC can be used during the design, development, and deployment phases.
  • DBC fits in nicely with our concept of crashing early.

Conclusion: Most of you are probably thinking, do we need another development approach when we already have Test-Driven Development (TDD)? DBC and TDD are different approaches within the broader topic of the software development process. Both have value, and both are used in different situations. The DBC approach can be used across design, development, and deployment, and it fits perfectly in a world where we follow the concept of crashing early. So give it a shot; I will be happy to discuss it, you can reach out to me here.

Cheers!

#100DaysToOffload #SoftwareDevelopment #DBC #DesignByContract

Django's Q() object helps define SQL conditions on the database and can be combined with the & (AND) and | (OR) operators. Q() gives flexibility in defining and reusing conditions.

  • Using Q() objects to make AND conditions.
  • Using Q() objects to make OR conditions.
  • Using Q() objects to make reusable conditions.

Using Q() objects to make AND conditions: We can use Q() objects to combine multiple filter conditions into one, since chained filter() calls always AND their conditions together.

from django.db.models import Q

# Without Q() object
document_obj = Document.objects.filter(created_by=1282).filter(doc_type='purchase_order').filter(edit=0).filter(cancelled=0)

#With Q() object
q_filter_document = Q(created_by=1282) & Q(doc_type='purchase_order') & Q(cancelled=0) & Q(edit=0)

# can also be written as
q_filter_document_another_way = Q(created_by=1282, doc_type='purchase_order', cancelled=0, edit=0)

document_obj = Document.objects.filter(q_filter_document)

Using Q() objects to make OR conditions

from django.db.models import Q


# With Q() objects
q_filter_document = Q(created_by=1282) | Q(doc_type='purchase_order')
document_obj = Document.objects.filter(q_filter_document)

Q() to make reusable filter conditions: The best use of Q() objects is reusability; we define a Q() once and can combine it with other Q() objects with the help of the &, |, and ~ operators.

Let's consider a use case in which the user can generate a report based on certain filters. The user can filter the report on these values: document_type, is_draft, created_by, document_status.


from django.db.models import Q

def get_document_object(document_type, is_draft, created_by, document_status):
    base_query = Q(active=1, cancelled=0, created_by=created_by, document_type=document_type, is_draft=is_draft)

    # based on the condition we combine different Q() objects to filter the table
    if document_status == 'in_progress':
        base_query = base_query & Q(document_status=document_status, completed=0)
    elif document_status == 'completed':
        base_query = base_query & Q(document_status=document_status, completed=1)

    return Documents.objects.filter(base_query)

Inside Q() objects we can use the same field lookups we use in filter(), such as __in, __startswith, __endswith, etc.
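
For instance, a small sketch (the field values here are only illustrative) combining a lookup with the ~ (NOT) operator:

from django.db.models import Q

starts_with_purchase = Q(doc_type__startswith='purchase')
not_cancelled = ~Q(cancelled=1)
document_obj = Document.objects.filter(starts_with_purchase & not_cancelled)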

Conclusion: Q() objects contribute to clean code and reusability. They help define conditions with the &, |, and ~ relational operators to simplify complex queries.

Cheers!

#django #python #100DaysToOffload

In the previous blog post we discussed the F() expression; we will now explore more query expressions in Django. The ones covered in this post are:

  • Func() Expression
  • Subquery Expression
  • Aggregation() Expression

Func() Expression: Func() is the base of all expressions and can be used to create your own custom expression for a database-level function.

# The table we are using for our query is *Student*, which keeps records of the students for the whole school.

from django.db.models import F, Func
student_obj = Student.objects.annotate(full_name=Func(F('first_name') + F('last_name'), function='UPPER'))

# This will give a student object with a new field, *full_name*, holding the student's name in upper case.
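
Since Func() is the base class, you can also subclass it to wrap a database function yourself. A hedged sketch (the Round class and the fees_paid field are only illustrative, not part of the example model above):

from django.db.models import F, Func

class Round(Func):
    function = 'ROUND'  # maps to the SQL ROUND() function
    arity = 1           # accepts exactly one argument

student_obj = Student.objects.annotate(fees_rounded=Round(F('fees_paid')))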

Subquery Expression: Subqueries are like nested conditions in the query filter, which help you turn a complex query into a clean, concise one. But you need to know the order in which the queries will be executed to use them effectively. While using a Subquery you also need to know about OuterRef, which is like an F() expression but points to a value of the parent query. Let's see both Subquery and OuterRef in action.

# you are given a task to get the name of the student whose name starts with *S* and whose fees are due.

from django.db.models import OuterRef, Subquery
fee_objects = Fees.objects.filter(payment_due__gt=0)
student_obj = Student.objects.filter(name__startswith='S').filter(id__in=Subquery(fee_objects.values('student_id')))

# Get the latest remark for each student
remark = Remark.objects.filter(student_id=OuterRef('pk')).order_by('-created_at')
student_obj = Student.objects.annotate(newest_remark=Subquery(remark.values('remark_strl')[:1]))

Aggregation() Expression

An aggregation expression is a Func expression with a GROUP BY clause in the query.

# get the total number of students enrolled in the *Blind Faith* subject.

from django.db.models import Count
student_obj = Student.objects.filter(subject_name='blind_faith').values('subject_name').annotate(total_count=Count('id'))

Note: All the queries in the code above are untested. So if you see a typo or a query that does not make sense, feel free to reach out to me at sandeepchoudhary1507[at]gmail[DOT]com.

Cheers!

#100DaysToOffload #django #python

While working from home, one of the issues I faced was that my laptop's charger remained plugged in almost all the time, because of which I had to replace my laptop battery. To deal with the problem I have now written a script that notifies me when the battery charging level goes above 85% or below 20%.

#! /bin/bash                                                                                                                                          

while true
do

    battery_level=`acpi -b | grep -o '[0-9]*' | sed -n  2p`
    ac_power=`cat /sys/class/power_supply/AC/online`

    #If the above command raises the error "No such file or directory", try the command below instead.
    #ac_power=`cat /sys/class/power_supply/ACAD/online`
    if [ $ac_power -gt 0 ]; then
        if [ $battery_level -ge 85 ]; then
            notify-send "Battery Full" "Level: ${battery_level}%"
        fi
    else
        if [ $battery_level -le 20 ]; then
            notify-send --urgency=CRITICAL "Battery Low" "Level: ${battery_level}%"
        fi
    fi
    sleep 120

done

Here is a breakdown of the important commands and what they are actually doing.

  • First is acpi, which reports the battery status and other ACPI information.
  • grep is used to extract the integer values from the acpi output.
  • sed is used to pick the second value from the grep result.
>> acpi -b
Battery 0: Discharging, 54%, 02:03:37 remaining
>> acpi -b | grep -o '[0-9]*'
0
54
02
04
41
>> acpi -b | grep -o '[0-9]*' | sed -n 2p
54
  • After that, we check whether the charger is plugged in; if it is, we check whether the battery level has crossed the upper limit and, if so, send a notification.
  • Otherwise we check for the battery-low case, which sends a notification if the battery level is 20% or less.
  • These conditions are put in a continuous loop that checks again after a sleep of 120 seconds.

To make this script run automatically, give it execute permission and add the command that runs it to your ~/.profile, then reboot the system.

>> sudo chmod +x /path/to/battery-notification.sh
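
Then add a line along these lines to ~/.profile so the script starts in the background at login (the path is just a placeholder; point it at wherever you saved the script):

/path/to/battery-notification.sh &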

You can find my notes on shell commands here.

Thanks shrini for pointing out the issue on Ubuntu 20 with the line ac_power=`cat /sys/class/power_supply/AC/online` :) Cheers!

#100DaysToOffload #automation #scripts

What is the F() expression? First, let me explain what query expressions are: these expressions let you use a value or a computation in update, create, filter, order by, annotation, and aggregation operations. The F() object represents the value of a model field or annotated column. It lets you avoid loading the value of the field into Python memory; instead, the work is handled directly in the database query.

How to use the F() expression? Import it with from django.db.models import F and pass the name of the field or annotated column as an argument; it then refers to the value of that field in the database, without Python ever seeing the value. Let's see some examples.

from django.db.models import F

# Documents is the table which holds the details of the documents submitted by users from the registry portal for GYM membership

# We need to update the count of the documents submitted by the user with pk=10091

# without using F Expression

document = Documents.objects.get(user_id=10091)
document.document_counts += 1
document.save()

# Using F expression
document = Documents.objects.get(user_id=10091)
document.document_counts = F('document_counts') + 1
document.save()

Benefits of the F() Expression.

  • With the help of the F expression we can make our query clean and concise.
    from django.db.models import F
    # update() is a queryset method, so we filter and update in a single query
    Documents.objects.filter(user_id=10091).update(document_counts=F('document_counts') + 1)

    #Here we also gain some performance advantages
    #1. All the work is done at the database level, rather than pulling the value from the database into Python memory to do the computation.
    #2. It saves query hits on the database.
  • The F expression can save you from race conditions. Consider a scenario where multiple users access your database: both users fetch the Document object for user 10091 while the count value is two. One user increments the value and saves it, the other does the same, and the value ends up saved as three instead of four, because both users fetched it when it was two.

  # User A fetches the document object; the value of document_counts is two.
  document = Documents.objects.get(user_id=10091)
  document.document_counts += 1
  document.save()
  # after the operation the value of document_counts is three

  # Running in parallel, user B also fetches the object while document_counts is two.
  document = Documents.objects.get(user_id=10091)
  document.document_counts += 1
  document.save()
  # after the operation the value of document_counts is three

  # But the value should actually be four; using an F expression here would have saved us from this race condition.
  • F expressions are persistent, which means the expression persists on the instance after the save operation, so you have to use refresh_from_db() to avoid it being applied again (see the sketch after this list).
  document = Documents.objects.get(user_id=10091)
  document.document_counts = F('document_counts') + 1
  document.save()

  document.document_validation = 0
  document.save()

  # This will increase the value of *document_counts* by two rather than one, as the expression persists and calling save will trigger the increment again.

  • More examples of the F expression in action with filter and annotate queries.
from django.db.models import F

# annotation example
annotate_document = Document.objects.annotate(created_by_full_name=F('created_by_first_name') + F('created_by_last_name'))


# filter example
filter_document = Documents.objects.filter(total_documents_count__gt=F('valid_documents_count'))
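
As a quick sketch of the fix for the persistence point above (reusing the same hypothetical Documents model): refresh the instance from the database before touching it again, so the F() expression is resolved back to a plain value.

document = Documents.objects.get(user_id=10091)
document.document_counts = F('document_counts') + 1
document.save()

# reload the row so document_counts becomes a concrete integer again
document.refresh_from_db()

document.document_validation = 0
document.save()  # no second increment this time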

That's all about the F expression. It is pretty handy and helps you improve performance by reducing value loading into memory and optimizing query hits, but you have to keep an eye on the persistence issue, which should not be a problem if you are writing your code in a structured way.

Cheers!

#100DaysToOffload #django #python

Journaling is a great way to keep track of your progress and emotional state; applying the same journaling principle to managing your finances can be of great help in seeing how money flows in and out of your life :).

So, here we will explore how to journal in Emacs or any editor of your choice with the help of the tool ledger.

Before starting, let's get familiar with the basic terminologies.

  • Assets – the money that you have.
  • Liabilities – the money that you owe, or in other words, debt.

Ledger is double-entry accounting software, which means that for every transaction you record both where the money comes from (e.g. a savings account or credit card) and where it goes (expenditure, investment, shopping). All these entries should balance each other out so the result is zero; if it isn't, there is an issue in your entries. The best part of Ledger is that it keeps your data in a simple text file, never alters it, and you always have all your data with you.

So, Ledger reads this simple text file and generates all kinds of reports that you need. Emacs comes into the picture to manage these text files and, with org-mode, gives a solid way to work with them; we will discuss that some other time.

Now get ready to set up the journal system.

Installing the Tools

  • Download Ledger for your OS from here; for the lazy ones on Ubuntu, you can follow the steps given below.
$ sudo add-apt-repository ppa:mbudde/ledger
$ sudo apt-get update
$ sudo apt-get install ledger
  • Open your Emacs editor and then follow these steps.

    • Press Alt-X package-install [Enter Key]
    • Type ledger-mode [Enter Key]; this will install the ledger-mode package
    • Open your Emacs config file and paste this snippet. We are telling ledger-mode to activate for files with the .dat extension.

      (use-package ledger-mode
           :ensure t
           :init
           (setq ledger-clear-whole-transactions 1)
      
           :mode "\\.dat\\'")
      

Ledger Mode in Action

  • Create a file with the extension .dat and open it in Emacs.
  • Press Ctrl-C Ctrl-A to add an entry to the file.
    • It will ask for the date of the entry; type it and press Enter.
    • Give a nice heading to your Ledger entry and add your expense.

Ledger entry example – it's up to you how you want to maintain your journal; below is a sketch of how entries to sort out your expenses might look.
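
A minimal sketch of what entries can look like (the accounts and amounts are placeholders; each transaction must balance to zero, so the last posting can be left without an amount and Ledger fills it in):

2019/07/05 Grocery shopping
    Expenses:Food:Groceries        1500.00 INR
    Assets:Bank:Savings

2019/07/07 Credit card bill payment
    Liabilities:CreditCard         3200.00 INR
    Assets:Bank:Savings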

You can also plan your budget in Ledger and automate transactions; if you are geeky enough, you can write code that reads the spreadsheet shared by your bank to populate the ledger. Press Ctrl-C Ctrl-O Ctrl-R for report generation; you can find more about reports here.

#100DaysToOffload #financial-freedom #emacs

date: 2019-07-08 (originally posted here)

Identity & Access Management (IAM) lets users manage access control policies for resources by defining who (identity) can access what (roles). Today we will talk about Google Cloud Identity & Access Management, understand what it is, and how to use it.

A policy in IAM is composed of a list of bindings, each of which binds member identities and a role together to control access to Google Cloud resources.
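
Roughly, a policy with a single binding looks like this (the role and member here are just examples):

{
  "bindings": [
    {
      "role": "roles/pubsub.publisher",
      "members": ["user:song@gmail.com"]
    }
  ]
}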

A member can be of the following types:

  • Google Account: any valid Google account, with gmail.com or any other domain name.
  • Service Account: an account that belongs to an application rather than an individual; you can have as many service accounts as you need for the logical components of your application.
  • Google Group: a collection of different Google accounts and service accounts. Every group has a unique email ID which can be used to identify members in the IAM policy. The benefit of a group is that if you want to change a user's permissions, you can simply move the user from one group to another rather than changing the permissions of the user directly.
  • G Suite domain: the virtual group of all the accounts created in the organization's G Suite.

Roles, on the other hand, are collections of permissions. A permission is represented as <service>.<resource>.<verb>, for example pubsub.subscriptions.consume, and determines what type of operation can be performed on a resource. Permissions cannot be applied to resources directly; instead you assign roles, which are groups of permissions.

In the Google Cloud Platform, roles are of three kinds:

  • Primitive Roles
  • Predefined Roles
  • Custom Roles

Primitive Roles

These are of three types, Owner, Editor, and Viewer, as the names suggest.

  • Viewer has access only to view resources and data.
  • Editor has Viewer permissions plus permission to change/edit resources.
  • Owner has Editor permissions plus permission to manage all resources and users.

Predefined Roles

These are the roles provided by Cloud IAM in addition to the primitive roles. They give more granular access to resources and differ for each resource type in the cloud; you can check these roles over here.

Custom Roles

Cloud IAM lets the user define custom roles if primitive and predefined roles do not fulfill their requirements. There are some points to remember while creating custom roles: they can be defined at the organization and project level but not at the folder level, and the user creating them needs the iam.roles.create permission.

So now the question is how these roles actually work. As we know, a policy is a list of bindings which bind members and roles; these policies are attached to resources and enforce access control when those resources are accessed.

Google Cloud policies have a hierarchy: Organization > Folder > Project > Resources. Every resource has exactly one parent and inherits the policy of its parent, so any policy set on a parent is applied to all its children. The Google Cloud IAM docs have a diagram showing how this hierarchy looks.


Here is an example from the official docs of how the permission hierarchy works:

In the diagram above, topic_a is a Cloud Pub/Sub resource that lives under the project example-prod. If you grant the Editor role to micah@gmail.com for example-prod, and grant the Publisher role to song@gmail.com for topic_a, you effectively grant the Editor role for topic_a to micah@gmail.com and the Publisher role to song@gmail.com.
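
For instance, granting the project-level role from that example could be done with the gcloud CLI along these lines (project and account names taken from the quote above):

gcloud projects add-iam-policy-binding example-prod --member="user:micah@gmail.com" --role="roles/editor"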

So here is my effort to explain Google IAM policy in simple words. Hope you find it useful. Please do share any feedback or any topic you think I should cover in this post. Till then, Happy Clouding :)


date: 2019-05-22

Let's say you are working on a project and you depend on code from another repository, which you need in your project.

  • One way is to copy the code from the other repository into yours manually whenever it gets updated, which is not a great way :sad:
  • Another way is to let the git version control system do that for you, and it's super easy to do :smile:

Let me show you how we can do it.

Fetching from Another Repository

git remote add other <repository_link>
git fetch other
git checkout <your_target_branch>
git checkout -p other/target-branch file_path

If you have multiple files, you just have to change the last checkout statement to

git checkout other/target-branch file_path1 file_path2

But wait, there is one catch here: the path of the file in your repository should be a mirror image of (the same as) the path of the file in the repository you are fetching from.

Fetching from Same Repository

Now, if you want to fetch files from another branch of the same repository, you just have to do

git checkout other_branch_name file_path1 file_path2

I have to admit that even after three years of working with git, it still excites me that there are a lot of things I do not know about it. If you also have some important time-saving git commands which you feel can save someone else's time, please share them in the comments, because sharing is caring :sunglasses:.

Cheers!

Happy Coding

date: 2019-05-19

As part of my job, I have to scrape some websites to help our sales team with data on the market. Until now they were doing it manually, which is a tedious job and consumes a lot of their productive time. So after a bit of searching and going through different tools and frameworks, I came across a framework named Scrapy. Here I am going to share how to set up and use Scrapy.

Scrapy is a free and open-source web-crawling framework written in Python which is used to extract data from websites without much hassle. It has very nice documentation, which you can check out here.

Steps to Install Scrapy

sudo apt-get install python-dev python-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev
pip install Scrapy

Steps to Create New Project

To create a Scrapy project, type this command in your terminal: scrapy startproject <project name>. The project structure will look like this.

Now go ahead, create a Python file under the project's spiders/ directory, and paste the code below.

#!/usr/bin/env python3
import scrapy

class RedditSpider(scrapy.Spider):
    # name of the scraper; it should be unique.
    name = "reddit"
    # list of the URLs to iterate over.
    start_urls = ['https://www.reddit.com/']

    # Called to do any operation on the response of the above URLs.
    def parse(self, response):
        # css selector of the anchor tag which contains the headers
        top_post = response.css("a.SQnoC3ObvgnGjWt90zD9Z")
        for post in top_post:
            self.log(post.css('::text').extract_first())

To start scraping, type

`scrapy crawl reddit`

Here we are scraping the Reddit website for the latest posts and getting the header of each post. The output of the above code will look like this.

  • Trump Organization ‘Sold Property to Shell Company Linked to Maduro Regime,’ Says Report
  • Blind people of Reddit, what do you find sexually attractive?
  • A “caravan” of Americans is crossing the Canadian border to get affordable medical care
  • A “caravan” of Americans is crossing the Canadian border to get affordable medical care
  • [Post Game Thread] The Houston Rockets defeat the Golden State Warriors, 112-108, behind Harden's 38 points to level the series 2-2, despite the continued brilliance of Kevin Durant
  • 18, my friend here is failing biology and thinks she's unroastable. Go for it guys, and go hard
  • If you strike me down, I shall become more powerful than you can possibly imagine. [BOTW]
  • ELI5: Why are all economies expected to “grow”? Why is an equilibrium bad?
    ....

Now the best part of Scrapy is that if you want to experiment with any website before creating a project, you can easily do that.

scrapy shell 'https://www.reddit.com/'

Then you can try different CSS selectors on the response. There is a lot more you can do with Scrapy, like saving the results in JSON or CSV format and even integrating it with a Django project; I might show that in the next post, till then goodbye.
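
For example, once the spider yields items (instead of only logging them), the results can be dumped straight to a JSON file with the -o flag (the file name is just an example):

scrapy crawl reddit -o posts.json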

Cheers

date: 2019-10-03 (originally posted here)

A generator is a function whose code is not all executed at once, as is done in a normal function; a normal function runs from top to bottom until it hits a return statement. A function that contains a yield statement is called a generator function. A generator function executes differently: execution pauses at each yield statement rather than ending at a return, and calling the next() method resumes execution from where it left off. When there is no further yield to run, a StopIteration exception is raised.

So let's see how to create and run a generator in Python.

def fib(n):
    a, b = 0, 1
    while a <= n:
        yield a   # yield statement.
        a, b = b, a + b

Now let's execute the function fib().

fib_fun = fib(10)
next(fib_fun) # 0
next(fib_fun) # 1
next(fib_fun) # 1
.
.
.
next(fib_fun) # 8
next(fib_fun) # reached the end will raise StopIteration Error.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration

Alternatively, you can use a for loop, which calls next() in the background and handles StopIteration for you.

for fib_value in fib(10):
    print(fib_value)

# Output
0
1
1
2
3
5
8

So today we went through the generators concept in Python. Now you might be wondering where you can use this; let me state some use cases (a small sketch follows the list).

  • They can be used for memory management: instead of building a whole list at once, we can use a generator to produce the data one item at a time so that less load falls on memory.
  • Generators can be used to define infinite streams, as shown below.
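
A minimal sketch of an infinite stream (the naturals helper is hypothetical); itertools.islice takes just the first few values so the loop never runs forever:

from itertools import islice

def naturals():
    n = 0
    while True:
        yield n   # produces values forever; never returns on its own
        n += 1

print(list(islice(naturals(), 5)))  # [0, 1, 2, 3, 4]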

If you know of any more use cases, please do share them in the comments, and if you want to share something else or talk about generators, feel free to ping me on Twitter.

Till then Cheers :)
Happy Digging.