dgplug member blogs

Reader

Read the latest posts from dgplug member blogs.

from sandeepk

In the previous blog post, we discussed the F() Expression. We will now explore more query expressions in Django. To name a few, in this post we will discuss:

  • Func() Expression
  • Subquery Expression
  • Aggregation() Expression

Func() Expression

Func() Expression is the base of all expressions and can be used to create your own custom expressions for database-level functions.

# The table that we using for our query is the *Student* which keeps records of the students for the whole school.

from django.db.models import F, Func
student_obj = Student.objects.annotate(full_name=Func(F('first_name') + F('last_name'), function='UPPER'))

# This will give student objects with a new field, *full_name*, holding the student's full name in upper case.

Subquery Expression

A Subquery is like a nested condition in a query filter, which helps you turn a complex query into a clean, concise one. But to use it effectively, you need to know the order in which the queries will be executed. While using a Subquery you will also need to know about OuterRef, which is like an F() Expression but points to a value in the parent query. Let's see both Subquery and OuterRef in action.

# You are given a task: get the names of the students whose name starts with *S* and whose fees are due.

from django.db.models import OuterRef, Subquery
fee_objects = Fees.objects.filter(payment_due__gt=0)
student_obj = Student.objects.filter(name__startswith='S').filter(id__in=Subquery(fee_objects.values('student_id')))

# Get the latest remarks for the students
remark = Remark.objects.filter(student_id=OuterRef('pk')).order_by('-created_at')
student_obj = Student.objects.annotate(newest_remark=Subquery(remark.values('remark_str')[:1]))

Aggregation() Expression

An Aggregation Expression is a Func Expression with a GROUP BY clause in the query.

# Get the total number of students enrolled in the *Blind Faith* subject.

from django.db.models import Count
student_obj = Student.objects.filter(subject_name='blind_faith').aggregate(total_count=Count('id'))

Note: none of the queries in the code above have been tested. So if you see a typo or a query that does not make sense, feel free to reach out to me at sandeepchoudhary1507[at]gmail[DOT]com.

Cheers!

#100DaysToOffload #django #python

 
Read more...

from sandeepk

While working from home, one of the issues I faced was that my laptop charger remained plugged in almost all the time, because of which I had to replace my laptop battery. To deal with the problem, I have written a script that notifies me about the battery charging level when it goes above 85% or below 20%.

#! /bin/bash                                                                                                                                          

while true
do

    battery_level=`acpi -b | grep -o '[0-9]*' | sed -n  2p`
    ac_power=`cat /sys/class/power_supply/AC/online`
    if [ $ac_power -gt 0 ]; then
        if [ $battery_level -ge 85 ]; then
            notify-send "Battery Full" "Level: ${battery_level}%"
        fi
    else
        if [ $battery_level -le 20 ]; then
            notify-send --urgency=CRITICAL "Battery Low" "Level: ${battery_level}%"
        fi
    fi
    sleep 120

done

Here is a breakdown of the important commands to explain what they are actually doing.

  • First is acpi, which tells us the battery information and other ACPI information.
  • grep is used to extract the integer values from the acpi output.
  • sed is used to pick the second value from the grep result.
>> acpi -b
Battery 0: Discharging, 54%, 02:03:37 remaining
>> acpi -b | grep -o '[0-9]*'
0
54
02
03
37
>> acpi -b | grep -o '[0-9]*' | sed -n 2p
54
  • After that, we check whether the charger is plugged in; based on that, we check whether the battery level exceeds the prescribed limit and, if so, send a notification.
  • Then we check for the battery-low condition, which sends a notification if the battery level is less than 20%.
  • These conditions are put in a continuous loop that checks again after a sleep time of 120 s.

To make this script run automatically, you have to give it execute permission, add the command that runs it to your ~/.profile, and reboot the system.

>> sudo chmod +x /path/to/battery-notification.sh
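The line added to ~/.profile can be as simple as the following (the path is the same placeholder as above; point it at wherever you saved the script). The trailing & runs the script in the background so login is not blocked:

```
# start the battery notifier in the background on login
/path/to/battery-notification.sh &
```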

you can find my notes on shell commands here

Cheers!

#100DaysToOffload #automation #scripts

 
Read more...

from sandeepk

What is the F() Expression? First, let me explain what Query Expressions are: these expressions let you use a value or computation in update, create, filter, order by, annotation, and aggregation operations. An F() object represents the value of a model field or annotated column. It lets you avoid loading the value of the field into Python memory; instead, the value is handled directly in the database query.

How to use the F() Expression? To use the F expression, you have to import it with from django.db.models import F and pass the name of the field or annotated column as an argument. It will then refer to the value of the field in the database, without Python ever knowing the value. Let's see some examples.

from django.db.models import F

# Documents is the table which keeps the details of the documents submitted by users from the registry portal for GYM membership

# We need to update the count of documents submitted by the user with pk=10091

# without using F Expression

document = Documents.objects.get(user_id=10091)
document.document_counts += 1
document.save()

# Using F expression
document = Documents.objects.get(user_id=10091)
document.document_counts = F('document_counts') + 1
document.save()

Benefits of the F() Expression.

  • With the help of the F expression we can make our queries clean and concise.
    from django.db.models import F
    Documents.objects.filter(user_id=10091).update(document_counts=F('document_counts') + 1)

    # Here we also gain some performance advantages:
    # 1. All the work is done at the database level, rather than pulling the value from the database into Python memory to do the computation.
    # 2. It saves query hits on the database.
  • The F Expression can save you from race conditions. Consider a scenario where multiple users access your database. When both users access the Document object for user 10091, the count value is two. When one user updates the value and saves it, and the other user does the same, the value will be saved as three, not four, because when both users fetched the value it was two.

  # user A fetch the document object, and value of document_counts is two.
  document = Documents.objects.get(user_id=10091)
  document.document_counts += 1
  document.save()
  # after the operation value of document_counts is three

  # Code running in parallel: User B also fetches the object, and the value of document_counts is two.
  document = Documents.objects.get(user_id=10091)
  document.document_counts  += 1
  document.save()
  # after the operation value of document_counts is three

  # But the value should actually be four. Using the F expression here will save us from this race condition.
  • F Expressions are persistent, which means the expression persists after the save operation, so you have to use refresh_from_db() to avoid re-applying it.
  document = Documents.objects.get(user_id=10091)
  document.document_counts = F('document_counts') + 1
  document.save()

  document.document_validation = 0
  document.save()

  # This will increase the value of *document_counts* by two rather than one, as the expression persists and calling save again triggers the increment again.

  • More examples of the F Expression in action with filter and annotate queries.
from django.db.models import F

# annotation example
annotate_document = Documents.objects.annotate(created_by_full_name=F('created_by_first_name') + F('created_by_last_name'))


# filter example
filter_document = Documents.objects.filter(total_documents_count__gt=F('valid_documents_count'))
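The lost-update scenario described above can be sketched in plain Python, with a dict standing in for the database row (no Django involved, just the read-modify-write pattern):

```python
# `db` stands in for the database row shared by both users.
db = {"document_counts": 2}

# Both users fetch the object before either one saves,
# so both local copies start at 2.
user_a_copy = db["document_counts"]
user_b_copy = db["document_counts"]

user_a_copy += 1
db["document_counts"] = user_a_copy  # User A saves: the row is now 3

user_b_copy += 1
db["document_counts"] = user_b_copy  # User B saves: the row is 3 again

# Two increments ran, but one update was lost.
print(db["document_counts"])  # 3, not 4
```

With F('document_counts') + 1, the increment is pushed into the UPDATE statement itself, so the database applies both increments and the row ends at four.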

That's all about the F Expression. It is pretty handy and helps you improve performance by reducing value loading into memory and optimizing query hits, but you have to keep an eye on the persistence issue, which I assume will not be a problem if you write your code in a structured way.

Cheers!

#100DaysToOffload #django #python

 
Read more...

from mrinalraj

According to wiki “ Cloud computing is the on-demand availability of computer system resources, especially data storage (cloud storage) and computing power, without direct active management by the user”

Benefits it has brought

With the benefits of IaaS (Infrastructure as a Service) and PaaS (Platform as a Service), we can now use services on the go without any headache of maintaining the underlying facility.

The Difference

Differences

 
Read more...

from sandeepk

Journaling is a great way to keep track of your progress and emotional state. Using the same journaling principle to manage your finances can be of great help in seeing how money flows in and out of your life :).

So, here we will explore how to journal in Emacs or any editor of your choice with the help of the tool ledger.

Before starting, let's get familiar with the basic terminologies.

  • Assets – It's the money that you have.
  • Liabilities – It's the money that you owe, or you can say, debt.

Ledger is double-entry accounting software, which means that you have to mention both where the money flows in from [e.g. savings account, credit card] and where it flows out to [expenditure, investment, shopping], and all these entries should balance each other out; the result should be zero, and if it's not, there is an issue in your entries. The best part of the ledger software is that it keeps your data in a simple text file, doesn't alter your data, and you always have all your data with you.

So, Ledger reads the simple text file and generates all kinds of reports that you need. Emacs comes into the picture to manage these text files; it also gives a solid way to manage them with org-mode, which we will discuss some other time.

Now get ready to set up the journal system.

Installing the Tools

  • Download Ledger on your system based on your OS from here — for the lazy ones on Ubuntu, you can follow the steps given below.
$ sudo add-apt-repository ppa:mbudde/ledger
$ sudo apt-get update
$ sudo apt-get install ledger
  • Open your Emacs editor and then follow these steps.

    • Press Alt-X package-install [Enter Key]
    • Type ledger-mode [Enter Key], this will install the ledger-mode package
    • Open your Emacs config file and paste this snippet. We are telling ledger-mode to activate for files with the .dat extension.

      (use-package ledger-mode
           :ensure t
           :init
           (setq ledger-clear-whole-transactions 1)
      
           :mode "\\.dat\\'")
      

Ledger Mode in Action

  • Create a file with the extension .dat and open it in Emacs.
  • Press Ctrl-C Ctrl-A to enter an entry in the file.
    • This will ask for the date for the entry, afterward press enter.
    • Give a nice heading to your Ledger entry and add your expense.

Ledger entry example – It's up to you how you want to maintain your journal; here are some example entries to sort out your expenses.

You can also plan your budget in the ledger and automate transactions; if you are geeky enough, you can write code to read the spreadsheet shared by your bank and populate the ledger. Ctrl-C Ctrl-O Ctrl-R generates reports; you can find more about reports here.
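As a rough sketch of the format (hypothetical accounts and amounts, in the .dat file described above), a balanced double-entry record looks like this — the postings sum to zero:

```
2021/06/15 Grocery Store
    Expenses:Food                 1,200.00 INR
    Assets:Savings               -1,200.00 INR
```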

#100DaysToOffload #financial-freedom #emacs

 
Read more...

from sandeepk

—date: 2019-07-08 originally posted here

Identity & Access Management lets users manage access control/policies to resources by defining who (identity) can access what (roles). Today we will talk about Google Cloud Identity & Access Management and understand what it is and how to use it.

A Policy in IAM is composed of a binding list which binds member identities and roles together to limit access to Google Cloud resources.

A Member can be of the following types:

  • Google Account: This can be any valid Google account with gmail.com or with any other domain name.
  • Service Account: An account belonging to an application rather than an individual; you can have as many service accounts as you need for the logical components of your application.
  • Google Group: Google Groups are collections of different Google accounts and service accounts. Every group has a unique email ID which can be used to identify members in an IAM policy. The benefit of a group account is that if you want to change a user's permissions, you can simply move the user from one group to another rather than changing the user's permissions directly.
  • G Suite domain: A virtual group of all the accounts created in the organization's G Suite.

Roles, on the other hand, are collections of permissions. A permission is mainly represented as service.resource.verb, for example pubsub.subscriptions.consume. Permissions determine what type of operation can be performed on a resource. Permissions cannot be directly applied to resources; instead, you assign roles, which are groups of different permissions.

In the Google Cloud Platform, roles are of three kinds:

  • Primitive Roles
  • Predefined Roles
  • Custom Roles

Primitive Roles

These are of three types, Owner, Editor, and Viewer, as the names suggest.

  • Viewer has access only to view resources and data.
  • Editor has the Viewer permissions + permission to change/edit resources.
  • Owner has the Editor permissions + permission to manage all resources and users.

Predefined Roles

These are roles provided by Cloud IAM in addition to the primitive roles. They provide more granular access to resources, and they differ based on the resource; you can check these roles over here.

Custom Roles

Cloud IAM lets users define custom roles if the primitive and predefined roles do not fulfill their requirements. Though there are some pointers to remember while creating custom roles: custom roles can be defined at the Organization and Project level, but not at the Folder level, and creating them requires the iam.roles.create permission.

So now the question is how these roles actually work. As we know, a policy is a binding list which binds members and roles. These policies are attached to resources and enforce access control when the resources are accessed.

Google Cloud policies have a hierarchy: Organization > Folder > Project > Resources. Every resource has exactly one parent and inherits the policy from its parent. Any policy assigned to a parent applies to all its children. There is a diagram in the Google Cloud IAM docs which shows what this hierarchy looks like.


Here is an example from the official docs of how the permission hierarchy works:

In the diagram above, topic_a is a Cloud Pub/Sub resource that lives under the project example-prod. If you grant the Editor role to micah@gmail.com for example-prod, and grant the Publisher role to song@gmail.com for topic_a, you effectively grant the Editor role for topic_a to micah@gmail.com and the Publisher role to song@gmail.com.

So here is my effort to explain Google IAM policies in simple words. Hope you find it useful. Please do share any feedback or any topic you think I should cover in this post. Till then, Happy Clouding :)


 
Read more...

from mrinalraj

Today I am very excited to share with you my first song video, 'Zara Zara', on the dgplug platform via YouTube.

Do you know what gives more satisfaction in life? Is it academics or getting an engineering degree? ;D Mostly, it is nurturing your skills and finding a platform to present yourself.

For me, publishing my song gives me more satisfaction. In the end, when the time comes to retire, it will not be the CGPAs but the risky path of chasing your passion and the crazy things we did with friends that will keep us smiling :)

 
Read more...

from mrinalraj

Yesterday, 1st October, was the Foundation Day of Scriptink. I always imagined how easy it seems to maintain a project, compared with any normal college project. The audience always sees only the result and not the path. Coming to the point: being in Scriptink, I came to know how hard it is to keep maintaining consistency. You may be wondering, consistency in what way?

You are probably following Scriptink through the app, YouTube, or the LinkedIn page. More than two years of consistency in posting monthly short videos is a great achievement in itself. This is what the audience sees. But inside, a team is working day and night to make these two-minute short videos possible.

Not much to say, but Scriptink is all about us and not about I.

Click on this to experience the two-year journey with us.

 
Read more...

from mrinalraj

It was one of the best decisions to invest some time with DGPLUG summer training 2019 as I partially owe them my job.

During my virtual on-campus interview, I mentioned what I did during my summer vacation, and bonus skills like blogging helped me land an offer at a multinational company in a networking role.

I was asked what my hobbies are. Mostly everyone begins with answers like "I love to sing" and so on... I explained my blogging to them, and to back it up I briefly shared the stories of Edward Snowden and the Internet's Own Boy, Aaron Swartz, which gave me an edge over the other participants.

Once again, Thanks DGPLUG :)

Would love to connect with you.

 
Read more...

from mrinalraj

Every year JPMorgan Chase & Co. conducts the Code for Good Hackathon on a large scale to hire fresh graduates from engineering colleges, and every student keeps an eye out for it...

Luckily, I was one of the shortlisted candidates for the Code for Good Hackathon 2020. This article is for my friends who are waiting for insights on their upcoming CFG Hackathon.


Welcome


Key Points:

  • The first day was mainly for gelling with teammates. You need to express what you are good at and get on the same page with your team.
  • The second day I liked the most. It was a workshop on technical skills like web pages and Git usage. It's better if you have prior knowledge of Git, and you should at least have an idea of how to solve a merge conflict.
  • Now the third day, THE D-Day: the main hackathon, where you brainstorm + code, is only 24 hours long.

Try to focus on developing the main requirement posed by the non-profit NGOs. Keep updating your teammates and mentors about your work so that they can help if you get stuck for too long.


Schedule

Schedule for 3 days Hackathon


Skills to focus:

  • Communicate effectively with teammates and mentors, as we are all sailors with a common goal.
  • Knowledge of HTML, CSS, Bootstrap + any app-related knowledge is a plus.
  • Knowledge of SQL and the XAMPP server.
  • Basic knowledge of git push, git pull, git merge, and the ability to solve a merge conflict. It is advisable to try collaborating on a dummy file with a friend before entering the hackathon.

Our Presentation on The Nudge Foundation: Click here for Presentation and Demo

Conclusion

A hackathon is not only for hacking at problems but also for knowing new faces, sharing stories, and building memories.

The End is the new beginning


 
Read more...

from pradhvan

Last weekend I attended the EuroPython sprints, which were conducted virtually. The communication platform for the conference was Discord, and it was kept the same for the sprints too. It served as a good platform, as we were able to pair program with the maintainers by sharing our screens.

Day 1

The sprints opened at 12:30 PM IST and started with the first round of project introductions. A total of 12 projects took part in this year's sprints. The project maintainers were from varied timezones, and timezones are difficult to handle, so the first opening of the sprints had only a few maintainers talking about their projects.

The project that I started with on day one of the sprints was terminusdb. I primarily contributed to terminusdb's Python client, and we had Cheuk Ting Ho and Kevin Chekov Feeney to help us out. Kevin had written the JS client of the project and was here to work on the Python client.

The issue I picked up was increasing the test coverage of the project, and while working on it I also discovered some other issues: a deprecated function was still being used in the client, and the Makefile did not have a command to generate the coverage HTML of the project.

By the end of day one, I had moved the coverage of terminusdb_client/woqlclient/connectionConfig.py from 62% to 70%, with a PR to remove the deprecated function from the client. Along the way, I learned about graph databases and how terminusdb has git-like features for the database.

Day 2

I started late on the second day and continued to work on the test coverage PR. I fixed some minor flake8 errors in it, pushed the coverage to 75%, and created a PR for that Makefile command. A lot of people in the sprints were confused by the setup of the project, so I opened a documentation issue for writing a wiki with setup instructions and contribution guidelines for new/first-time contributors.

Just an hour before the first closing session, I moved to scanapi, which is maintained by Camila Maia. I picked up some good first issues and got them merged in no time. I had seen this project at the closing of day one and found it very interesting.

The other projects that I found really interesting but could not contribute to were Hypothesis, Strawberry GraphQL, and commitizen.

Overall I had a really fun weekend and I am excited to contribute more to those projects.

 
Read more...

from pradhvan

I recently stumbled across a very peculiar topic called bit manipulation. In most of my programming days, I haven't actually relied on binary operations to get me a result. I knew that under the hood everything is converted into 0s and 1s, but it was all abstraction to me.

The case was different here. While working with bit manipulation, I had to actually rely on bitwise arithmetic operations to get to the result. So it became really interesting really soon.

Bitwise operators

Basic operations on bits are done with bitwise operators. Since we work primarily on bits, these operations are fast and optimized to reduce time complexity.

The first three, &, | and ~, are fairly straightforward, so I will only briefly go over them.

&: if both bit patterns are of equal size, the & operator compares each position and returns 1 only if both input bits are 1, and 0 otherwise.

    6       : 1 1 0
    5       : 1 0 1
            -------- &
              1 0 0

|: if both bit patterns are of equal size, the | operator compares each position and returns 1 if at least one of the input bits is 1, and 0 only if both are 0.

     5       : 1 0 1
     3       : 0 1 1
            --------  |
              1 1 1

~: the NOT operator just complements the bits it gets. In fancy computer lingo, it gives the one's complement of a number.

    5       : 1 0 1
            -------- ~
              0 1 0

Now coming to more interesting operators:

  • ^ : XOR
  • >> : Right Shift
  • << : Left Shift

XOR

If two bit patterns are of equal size, the ^ of the bits in each compared position is 1 if the compared bits differ and 0 if they are the same.

    6       : 1 1 0
    5       : 1 0 1
            -------- ^
              0 1 1
  • XOR of a number with itself is 0

    x = "Any int number"
    (x ^ x) == 0
    
  • XOR of a number with 0 is the number itself.

    (x ^ 0) == x
    
    
  • Ordering in XOR does not matter, both will give the same output.

    output = (7 ^ 3) ^ (5 ^ 4 ^ 5) ^ (3 ^ 4)
    output = 7 ^ (3 ^ (5 ^ 4 ^ 5)) ^ (3 ^ 4)
    
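These three properties can be checked quickly in plain Python:

```python
x = 42  # any int works here

# XOR of a number with itself is 0
assert (x ^ x) == 0

# XOR of a number with 0 is the number itself
assert (x ^ 0) == x

# Ordering does not matter: both groupings give the same output
assert ((7 ^ 3) ^ (5 ^ 4 ^ 5)) ^ (3 ^ 4) == 7 ^ (3 ^ (5 ^ 4 ^ 5)) ^ (3 ^ 4)
```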

While discussing Left Shift (<<) and Right Shift (>>), we will be talking about arithmetic shifts.

Left shift <<

  • The left shift shifts the binary digits left by n and pads 0s on the right.
  • The left shift is equivalent to multiplying the bit pattern by 2 ** k (if we are shifting k bits).
1 << 1 = 2 = 1 * (2 ** 1)
1 << 2 = 4 = 1 * (2 ** 2)
1 << 3 = 8 = 1 * (2 ** 3)
1 << 4 = 16 = 1 * (2 ** 4)
…
1 << n = 2 ** n

Right shift >>

  • The right shift shifts the binary digits right by n and pads 0s on the left.
  • The right shift is equivalent to (floor) dividing the bit pattern by 2 ** k (if we are shifting k bits).
4 >> 1 = 2
6 >> 1 = 3
5 >> 1 = 2
16 >> 4 = 1
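Both equivalences can be verified directly in Python:

```python
# Left shift by k multiplies by 2 ** k
for k in range(1, 5):
    assert (1 << k) == 2 ** k
assert (3 << 2) == 3 * (2 ** 2)

# Right shift by k floor-divides by 2 ** k
assert (4 >> 1) == 4 // 2
assert (5 >> 1) == 5 // 2   # 2, the fractional part is dropped
assert (16 >> 4) == 16 // (2 ** 4)
```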

Both the right shift and left shift operators come in really handy for masking.

Masking allows the user to check/change a particular bit at a particular position.

Some of the common functions associated with masking are:

Set Bit
  • The set bit method is generally used to SET a particular bit to 1.
  • To achieve this, we need to create a mask with a 1 at the particular position we want to SET.
  • The mask can be created with the help of the left shift operator, <<.
def set_bit(x, position):
    mask = 1 << position
    return x | mask

set_bit(6, 0)
  • In the above code snippet we are SETting the bit at the 0th index.
    masking = 1 << 0 = 1 * (2 ** 0) 
    
    6       : 1 1 0
    1 << 0  : 0 0 1
            -------- |
              1 1 1
IS BIT SET
def is_bit_set(x, position):
    shifted = x >> position
    return shifted & 1
Clearing Bit
def clear_bit(x, position):
    mask = 1 << position
    return x & ~mask
Flip Bit
def flip_bit(x, position):
    mask = 1 << position
    return x ^ mask
Modify Bit
def modify_bit(x, position, state):
    """
    state is param that tells us to set a bit 
    or clear a bit
    """
    mask = 1 << position
    return (x & ~mask) | (-state & mask)
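Restating the helpers above in one self-contained snippet (same logic, just gathered together) lets us sanity-check them end to end:

```python
def set_bit(x, position):
    return x | (1 << position)        # force the bit at `position` to 1

def is_bit_set(x, position):
    return (x >> position) & 1        # 1 if the bit is set, else 0

def clear_bit(x, position):
    return x & ~(1 << position)       # force the bit at `position` to 0

def flip_bit(x, position):
    return x ^ (1 << position)        # toggle the bit at `position`

def modify_bit(x, position, state):
    # state 1 sets the bit, state 0 clears it
    mask = 1 << position
    return (x & ~mask) | (-state & mask)

assert set_bit(0b110, 0) == 0b111
assert is_bit_set(0b110, 1) == 1
assert clear_bit(0b111, 1) == 0b101
assert flip_bit(0b110, 0) == 0b111
assert modify_bit(0b110, 0, 1) == 0b111
assert modify_bit(0b111, 0, 0) == 0b110
```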

Observations

Bit manipulation can be used to solve problems that you are familiar with but don't necessarily think of in terms of bits. Here are some of the observations I noted while using bit manipulation.

To check if the number is even
  • ANDing the number with 1 gives 0 or 1: 0 if it's even, 1 if it's odd.
x = "Any int number here"
(x & 1) == 0
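Concretely:

```python
# the lowest bit is 0 for even numbers and 1 for odd numbers
assert (4 & 1) == 0   # even
assert (7 & 1) == 1   # odd
assert (0 & 1) == 0   # zero is even
```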

Practice Question

To check if the number is a power of two
  • If a number is x, the binary representation of (x-1) can be obtained by simply flipping all the bits to the right of the rightmost 1 in x, including the rightmost 1 itself.
Let, x = 4 = (100)2
x - 1 = 3 = (011)2
Let, x = 6 = (110)2
x - 1 = 5 = (101)2
  • x & (x-1) has all the bits of x except the rightmost 1. In the examples below, the rightmost 1 of x (and the corresponding flipped bit of x-1) is enclosed in | |; the bits to its left are the same in x and x-1.
  • If the number is neither zero nor a power of two, it has a 1 in more than one place, so x & (x-1) is non-zero.
Let, x = 6 = 1|1|0
(x- 1) = 5 = 1|0|1

Let,x = 16 = |1|0000
(x-1) = 15 = |0|1111

Let,x = 8 = |1|000
(x-1) = 7 = |0|111

Let,x = 23 = 1011|1|
(x-1) = 22 = 1011|0|
x = "Any int number here"
(x & x-1) == 0
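Putting the observation together as a small function (the x != 0 guard is needed because 0 & (0 - 1) is also 0):

```python
def is_power_of_two(x):
    # x & (x - 1) clears the rightmost set bit; for a power of two
    # that was the only set bit, so the result is 0.
    return x != 0 and (x & (x - 1)) == 0

assert is_power_of_two(1)
assert is_power_of_two(16)
assert not is_power_of_two(6)
assert not is_power_of_two(0)
```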

There are a lot more things that can be done with just bits and are definitely not limited to the above observations. Try to find your own observations. Happy coding!

 
Read more...

from pradhvan

I recently finished reading Python Testing with Pytest by Brian Okken, and I am glad I picked this up rather than jumping straight into the docs. It's definitely a good introduction for people who haven't had their share of testing a Python codebase, let alone with pytest.

The book introduces a python CLI called Tasks and takes this as a base for writing all of its tests throughout the course of the book. Though eventually, the tests become more complex when you get into the latter half of the book.

The pros of the book are that it covers almost every part of the framework, from fixtures, plugins, and custom pytest configuration to using pytest with tools like coverage and mock. But if you're someone like me who hasn't had their share of testing a Python codebase, you might find yourself with a bit of information overload at times.

I did find the book a bit overwhelming in chapters like writing your own plugin, custom configuration, and using pytest with Jenkins, because these are features I wouldn't be using right out of the box. I will definitely come back to these chapters in the future if I need any of them.

Overall the book is really well-written keeping in mind beginners who are just picking up pytest as their first testing framework and also for folks who are moving towards pytest from any other testing framework. Exercises at the back of every chapter make sure you also get some hands-on experience of writing tests.

Just a personal tip for anyone who is picking this up and has less experience with pytest. Feel free to skip chapters or skim chapters that aren't useful right out of the box. You can always come back to them when you need those features.

 
Read more...

from pradhvan

2019 was a year of new beginnings, both personally and professionally. This was the year I got my first job and my first salary, and, in contrast to that, I also gave my first resignation. Yeah, that was fun!

This blog just highlights most of the things I did in the previous year.

Blog Posts

I posted 8 blogs this year. I know it's not that much. Initially, I had planned one blog a month, but by the end of the year, during the time I was interviewing for the new job, things started to fall apart and I could not commit to one blog a month.

The plan for this year is to blog more or at least be consistent with writing. Stick to at least one blog per month.

Books

The previous year was a good reading year compared to the last few. The Kindle I bought came in really handy during the long metro rides. Plus, I got some tech books cheap compared to their paperback prices, so I finished some of them too.

This year I started to take reading non-tech books a bit more seriously. So I am picking up a book a month and finishing it slowly, keeping in mind that the book should be under 800-1000 pages for the initial months, just to help build momentum.

Recently finished Parliamental and will be moving to The Elephant Vanishes.

Talks

I gave one talk at PyConf Hyderabad 2019, one of my favorite regional conferences in India. I also submitted one for a PyDelhi meetup, but sadly, by the time it was scheduled, I had already relocated. More on that later.

Open Source Contributions

One of the major things that I want to work towards this year is making more upstream contributions.

Last year I submitted two documentation patches to one org, aio-libs. The project was aiopg, the asyncio client library for Postgres, but that happened by sheer luck: as I was going through the documentation, I found some of it still using old-style decorator-based coroutines instead of new async def functions, so I submitted a patch to update them.

 
Read more...

from abbisk

Free Software

“Free” software is software that can be used, studied, modified, copied, and redistributed in modified or unmodified form, with little or no restriction. Free software is available gratis (free of charge) in most cases. In practice, for software to be distributed as free software, the human-readable form of the program (the source code) must be made available, along with a notice granting the user permission to further adapt the code and continue its redistribution for free. This notice either grants a “free software license”, or releases the source code into the public domain.

Open-Source Software

In the beginning, all software was free. In the 1960s, when IBM and others sold the first large-scale computers, these machines came with software which was free. This software could be freely shared among users, it came written in a programming language (source code available), and it could be improved and modified. Manufacturers were happy that people were writing software that made their machines useful.

Then proprietary software came to dominate the software landscape as manufacturers removed access to the source code. IBM and others realized that most users couldn't or didn't want to “fix” their own software, and that there was money to be made in leasing or licensing software. By the mid-1970s almost all software was proprietary. Proprietary software is software that is owned by an individual or a company (usually the one that developed it). There are almost always major restrictions on its use: users are not allowed to redistribute it, the source code is not available, and users cannot modify the programs. Software became an additional product that was for sale, and in 1980 US copyright law was modified to include software.

In the late 1970s and early 1980s, two different groups started what became known as the open-source software movement. On the East coast, Richard Stallman (1985), formerly a programmer at the MIT AI Lab, launched the GNU Project and the Free Software Foundation, “to satisfy the need for and give the benefit of ‘software freedom’ to computer users”. The ultimate goal of the GNU Project was to build a free operating system, and the GNU General Public License (GPL) was designed to ensure that the software produced by GNU will remain free, and to promote the production of more and more free software.

 
Read more...

from pradhvan

PyCon India is one of those conferences that I look forward to every year. This year marked my fourth conference in a row. I was excited to meet all my old friends and make some new ones.

ChennaiPy, the local Python user group of Chennai, hosted this year's conference. This meant two things: I would get to attend the conference in Chennai and also visit some beaches around Pondicherry. So yeah, I was super excited.

The journey to the conference started on 11 October; I was traveling from Delhi with two of my friends, Kuntal and Sakshi. Since we had planned our journey so that we would reach one night before the conference, we missed the pre-conference volunteers' meet. Kuntal and I regretted not taking the morning flight, as the pre-conference volunteers' meets are super fun: you get to see the venue beforehand, help out with swag bags, and interact with all the volunteers and organizers of the conference.

On reaching the Chennai airport we met Dedipyaman, who was staying with us. His name was a bit unique, so we called him twodee, which he later adopted as his nick. Traveling to our Airbnb apartment was a challenge in itself, as none of us knew Tamil. We were staying with 13 other folks; I knew most of them besides one, Shubo. I had seen his nick on #dgplug but hadn't met him in person. When we arrived at the apartment only Shubo was present; the rest came in an hour or two. As everyone settled in, we played some rounds of Uno while enjoying pizzas just before going to bed.

The next day I left with twodee and Sakshi for the conference; we were running a bit late. When we reached the venue I saw Kuntal at the registration desk. We all got our attendee cards and proceeded to the conference. I saw all my old friends, most of whom I only meet in person during conferences as they all live in different states, so it was fun to catch up. After roaming around the sponsor booths I went to attend Pradyun's talk, titled Python Packaging – where we are and where we're headed. I was interested in the talk because only a handful of people maintain pip, and since it's such a huge ecosystem in itself, it was interesting to get some insights from Pradyun about how packaging works with pip and how they plan to move forward. Later, in the tea break, I met Saurav and Haris. I learned a lot from the conversation we had; these people have been in tech much longer than me. Saurav talked about his company Deepsource, and how managing a small team of people who take up responsibility is easy: you don't have to worry about formal things like timesheets and leave policy, because people take responsibility for their work. Haris was working in a two-person team and, shockingly, carried a very old cell phone which didn't even have internet, so his take on life was very interesting.

The next day we had our annual #dgplug staircase meeting. Since Kushal was sick this year, Sayan took the initiative of conducting it. We discussed the first staircase meeting, what went wrong in this year's summer training (people weren't completing their tasks or showing up in the IRC channel), and what needs to be done now. I also met lambainsaan, who I had always thought was a bot.

The meeting concluded at noon, just in time for me to catch the talk “Let's hunt a memory leak”, so I ran to the hall to get a good spot. Sanket, the speaker, showed us the various ways he had solved memory-leak problems in a Flask app in production, while explaining Python's memory management along the way. I rushed for lunch after the talk as I had to be in the open spaces for PyDelhi's session.

Anuvrat had registered the open space for PyDelhi and other communities of the north. The agenda of the open space was how to be consistent in conducting meetups, what we can do at meetups to get people to come more often, and how we can increase the quality of the talks. I liked the idea of pushing all the 101 sessions to blog posts, or even hangout sessions a day before the event, so that we aren't limiting the target audience to just people who are starting out in tech. From what we have observed at recent meetups, experienced people who can mentor others and give great talks have stopped attending, and the problem is that a lot of 101 sessions were happening. We concluded that we can shift those 101 sessions to blog posts, and if someone wants to give a 101 session we can have themed meetups every month or two where they can present those talks. The open space was scheduled for half an hour, but we stretched it a bit longer as more people kept adding points to the discussion.

Before the closing keynote of the day I helped with volunteering at hall B. I was so excited for the keynote that during the tea break before it I went and sat in the second row of the hall, just so I could enjoy the talk from a good spot.

The conference ended with David Beazley's keynote. He live-coded a stack machine, wrote in Python an interpreter for a WebAssembly game that was originally written in Rust, and in the end added PyGame to make it into an actual game. It was a jaw-dropping moment for me. Though I got lost midway through the talk, as it was a bit advanced for me, when I looked around most people were feeling the same. The keynote ended with a standing ovation from everyone in the hall. For me, the whole closing keynote was like a movie; it was such a joy to just watch David live-code, and nothing could have been a better way to end the conference.

The last day of our stay in Chennai was a bit weird, as there was some issue with the water in our apartment, so we went a bit late to the workshop. I had bought tickets for David's workshop “Write your own Async”. In the workshop I tried to follow along with him, writing code just as he would, but after the second half I was a bit lost, so I just focused on listening. It was not exactly a workshop but more of him giving us a problem, all of us discussing the solution, and him live-coding it after the discussion. The solutions were so well designed that they were similar to the built-in functions the async module has. Since I tried to live-code along with him, I wasn't able to make detailed notes that I could revisit later, but luckily he uploaded the workshop screencast so I can revise the concepts again.

The day ended with me saying goodbye to all the people who had stayed late during the dev sprints, as the workshop and dev sprints were happening in parallel.

This marked the end of one more year of my PyCon India journey. It was my fourth PyCon India and the most special one: I stayed with people I look up to in real life and had lots of fun. The funny thing is that not all of them use Python as their day-to-day language, yet they came to a conference dedicated to it. I guess that's the beauty of the community: you meet so many people from different backgrounds and learn from them, which not only helps you become a better developer but also gives you a different perspective on life.

 
Read more...