dgplug member blogs


Read the latest posts from dgplug member blogs.

from Pradhvan

For the last couple of weeks I have been actively job hunting, and at one of the places I interviewed I was asked if I could do the take-home assignment in Go, since I had already mentioned in the interview that I had been learning Go on the side. I agreed to finish the assignment in Go, thinking this might be a good challenge to take on, and that by doing it I might have a chance to showcase my ability to pick up new things quickly.

Plus, having a deadline to complete the take-home assignment in a week could also accelerate my Go learning, as it provided a time-based goal to work towards.

Gopher Coffee Gif

So for a week I spent most of my time reading parts of Learning Go, referring to tutorials like Learn Go with Tests, watching old episodes of justforfunc: Programming in Go, a lot of stackoverflowing, and talking to friends who use Go as their primary language at work to get some pointers (no pun intended) on best practices, blogs to follow, etc.

After a week, I finished writing gogrep, a command-line program written in Go that implements Unix grep-like functionality.

gogrep has a fairly small subset of features: it can search for a pattern in a file or a directory, and it also supports flags like

  • -i : Make the search case-insensitive.
  • -c : Count the number of matches.
  • -o : Store the search results in a file.
  • -B : Print 'n' lines before the match.
While adding these features I also added a fair bit of tests to the project to see how tests are written in Go.

At the moment the test coverage of the entire project is around 72%, where all the major packages that serve as helpers or hold business logic have coverage greater than 83%, and a few have 100%.

Since gogrep is my first Go project, I do have some loose opinions about the language and the tools it offers, primarily from the perspective of a Pythonista who has been using Python as his primary language for the last four years.

  • Go does not have classes, and that's probably good: If you are coming from a background of classes and methods, you can relate some of it to receiver functions. Receiver functions come very close to behaving like methods on a class. Having written a few receiver functions for a struct, the straightforward use case becomes a no-brainer, but it might get a bit complicated once you start refactoring the code a second or third time.

  • Pointers! Oh my!: If you have ever done pointer arithmetic in C/C++ you know the pain. Thank god in Go you don't have to deal with pointer arithmetic. You still have to be careful, since you are dealing with memory addresses and it can get complicated, but it's definitely not that bad. You get used to it.

  • Tooling: I coded the entire project in VSCode and I loved the Go support it has: staticcheck suggestions for code snippets are quite helpful, and a test file automatically turning red the instant the associated function changes makes the whole developer experience amazing. I have heard even better things about GoLand by JetBrains from folks who write Go daily; can't wait to try that out. Besides that, gofmt, golint, go vet and go test were some other things that I found really handy. I did not play around much with the debugger, so I can't comment on that.

  • Makefile to the rescue: Since Go gives you a single binary in the end, you have to build it every time to check your code changes. Having some automated way, like a Makefile, makes things really easy. So I would suggest that during your initial project setup you invest in a good automated process for building and checking your changes.

  • Error handling pattern: Go has this pattern for handling errors which is very straightforward, but sometimes it might seem a bit too much in terms of repetitive code.

if err != nil {
    // Do something here to log/return the error
}

This seems to be the only way to handle errors, at least that I know of. So most of my code was sprinkled with a lot of these if err != nil statements in the top layer of the abstraction; for gogrep it was main.go, when I was calling func ParseFlags() for processing arguments.


// main.go

conf, output, err := parseflag.ParseFlags(os.Args[0], os.Args[1:])

if err == flag.ErrHelp {
    // Print the usage of the CLI
} else if err != nil {
    fmt.Println("Error: \n", err)
}

func ParseFlags(searchWord string, args []string) (config *Config, output string, err error) {
    // Suppressed code
    err = flags.Parse(args)
    if err != nil {
        return nil, buf.String(), err
    }

    // Suppressed code
    if len(flags.Args()) == 0 {
        return nil, "", fmt.Errorf("missing argument searchword and filename")
    }

    // Suppressed code
    return &conf, buf.String(), nil
}
  • Types magic: Since Go is statically typed, when defining a variable I have the option to either just declare it with a type and have a zero value in it, or directly initialize it with some value.

Even with functions I had to write a signature with input and output types.

func ParseFlags(searchWord string, args []string) (config *Config, output string, err error)

Having types in the function signature became such a great advantage later on, both for the developer experience when refactoring and when re-reading code that I wrote days back.

I became a fan of types in the codebase to such an extent that now I am all in for type annotations in my Python code if they can give me similar advantages.
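A rough Python equivalent of that Go signature, written with type hints (a hypothetical sketch: parse_flags and its return shape are made up for illustration and are not part of gogrep):

```python
from typing import Optional

def parse_flags(search_word: str, args: list[str]) -> tuple[Optional[dict], str, Optional[str]]:
    """Hypothetical Python mirror of Go's ParseFlags signature."""
    if not args:
        # Mirrors the Go version returning an error value
        return None, "", "missing argument searchword and filename"
    config = {"search_word": search_word, "files": args}
    return config, "", None

conf, output, err = parse_flags("hello", ["file.txt"])
print(conf, err)
```

Just as in Go, a type checker (or a reader) can now see the shape of the inputs and outputs without running the code.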

  • Things I skipped: One week is a very small time to peek into a language's features. So obviously I skipped a lot of Go features too, like interfaces (I might have used them, like io.Reader, but I definitely haven't written one) and generics, and I definitely did not touch goroutines and channels, the one thing most people praise Go for. There might be more, but from a bird's-eye view I can't think of anything else.

Overall, it was a really fun week of learning something new rather than using something that is already in my toolkit.

If I do more Go projects in the future, I would definitely like to give a talk, “Go for Python Developers”, where I talk about my experiences of building a series of projects in both Go and Python, like:

  • CLI tool

  • A key value store

  • A fully featured backend system for an ecommerce website/Twitter clone/blogging engine that will include REST APIs, a relational DB, Redis for caching, RabbitMQ as a queue, async workers for processing, etc. As close as I can get to the real thing.

Thus giving stronger opinions that I can back up, and a much deeper overview of what to expect when working with both languages.

Gopher Gif

References: – Gopher Gifs: Images


from Pradhvan

2022, in hindsight, was a rough year for me. At the start of the year I was super confident about what I wanted this year to be, but by the end of it I was as clueless as anyone.

Here are some highlights of the year, in no particular order, that summarise 2022 for me:

  • Gave two talks this year, one at PyCascades 2022 and one at PyLadies Berlin.

  • Mentored a person to give her first ever talk at EuroPython 2022.

  • Started learning Golang this year and wrote a lot of silly programs in it.

  • Attended an in person conference this year after ages. God how I missed those.

  • Did a two-week workcation in Himachal Pradesh; turns out I'm not that into this workcation thing.

  • Got back into coffee brewing. Aeropress for the win!

  • Picked up Japanese as a language this year and failed at it miserably. I hope I am more consistent with it next year.

  • Bonded with a lot of childhood friends whom I thought I had grown out of.

  • Started seeing someone.

  • Sent my parents on a 10-day vacation, which I think is the highlight of the year, since I had wanted to do something like this for my parents from the day I got my first paycheck.

  • Paid the initial down payment for a car for my parents.

  • Quit the job that I always tried too hard to fit into and that was affecting my mental health. Took a big risk here, quitting without a new job in hand; hope it pays off in the longer run.

I hope 2023 is a bit kinder to me and brings in some stability.


from mrinalraj


We proudly remember the importance of Gandhiji's role in Bihar's Champaran. Not only did farmers receive partial relief from the tyrant British law, but it also helped Gandhiji realize the grand dream of leading people in India's quest for freedom, using non-violence as a tool.

It was during the British colonial period, in 1917. Farmers were craving the individual right to choose what to produce on the lands rented from landlords. The British colluded with these greedy landlords and nawabs, forcing farmers into indigo production as a condition for getting the necessary financial loans, while purchasing the indigo at a throwaway price. The Bihar farmer's life became a mere joke. Everybody felt the pain. All that was needed was a little push.


Seeing the Bihar farmers' deteriorating condition, a moneylender, Raj Kumar Shukla, persuaded Gandhiji to come and understand the plight of the farmers.


On seeing Gandhiji's survey of the farmers' plight, the British Raj was very cautious about developments.

The British Raj tried all possible tricks to trap him in the chains of the court. But it acted as a catalyst, and the news spread to nearby villages.

Gandhiji started the Champaran Satyagraha. Soon this started getting attention. The British Raj felt the pressure to stop public awareness from spreading like wildfire and to get it sorted out soon.


Finally, the deal with the farmers was successful. Not only did the farmers get the raise, but the Champaran Satyagraha also proved to be a proving ground for non-violence. It was the first time Gandhiji felt the importance of non-violence: that if exercised by society, it could become a possible part of India's independence.


from sandeepk

The Debug Diary – Chapter I

Lately, I was debugging an issue with the importer tasks of our codebase and came across a code block which looks fine but makes an extra database query in the loop. Have a look at the Django ORM query:

jato_vehicles = JatoVehicle.objects.filter(
).only("manufacturer_code", "uid", "year", "model", "trim")

for entry in jato_vehicles.iterator():
    if entry.manufacturer_code:
        ymt_key = (entry.year, entry.model, entry.trim_processed)

You will notice we are using only, which loads only the set of fields mentioned and defers the other fields. But in the loop we are using the field trim_processed, which is a deferred field, and this will result in an extra database call.

Now, as we have identified the performance issue, the best way to handle cases like this is to use values or values_list. The use of only should be discouraged in such cases.

The updated code will look like this:

jato_vehicles = JatoVehicle.objects.filter(
).values_list("manufacturer_code", "uid", "year", "model", "trim_processed", named=True)

for entry in jato_vehicles.iterator():
    if entry.manufacturer_code:
        ymt_key = (entry.year, entry.model, entry.trim_processed)

By doing this, we are safe from accessing fields which are not mentioned in the values_list; if anyone tries to do so, an exception will be raised.

Note: by using named=True we get each result as a named tuple, which makes it easy to access the values :)
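For anyone unfamiliar with what named=True returns, the rows behave like the standard library's namedtuple (a plain-Python illustration; the Row fields and values here are made up to mirror the query above, this is not actual ORM output):

```python
from collections import namedtuple

# Hypothetical stand-in for one row returned by values_list(..., named=True)
Row = namedtuple("Row", ["manufacturer_code", "uid", "year", "model", "trim_processed"])
entry = Row("MC1", 42, 2021, "ModelX", "Base")

# Fields are accessed by name, just like on the rows in the loop above
ymt_key = (entry.year, entry.model, entry.trim_processed)
print(ymt_key)  # (2021, 'ModelX', 'Base')
```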


#Django #ORM #Debug


from sandeepk

select_for_update is the answer if you want to acquire a lock on a row. The lock is only released after the transaction is completed. This is similar to the SELECT ... FOR UPDATE statement in SQL.

>>> Dealership.objects.select_for_update().get(pk='iamid')
>>> # Here lock is only required on Dealership object
>>> Dealership.objects.select_related('oem').select_for_update(of=('self',))

select_for_update has these four arguments, with these default values:

  • nowait=False
  • skip_locked=False
  • of=()
  • no_key=False

Let's see what all these arguments mean.


nowait

Think of the scenario where the lock is already acquired by another query. In this case, do you want your query to wait, or to raise an error? This behavior is controlled by nowait: if nowait=True, the query will raise a DatabaseError; otherwise it will wait for the lock to be released.


skip_locked

As the name somewhat implies, it helps decide whether to consider locked rows in the evaluated query. If skip_locked=True, locked rows will not be considered.

nowait and skip_locked are mutually exclusive; using both together will raise a ValueError.


of

In select_for_update, when the query is evaluated, the lock is also acquired on the related rows selected in the query. If you don't wish that, you can use of, where you can specify which models to acquire a lock on:

>>> Dealership.objects.select_related('oem').select_for_update(of=('self',))
# Just be sure we don't have any nullable relation with OEM


no_key

This helps you create a weaker lock. This means other queries can create new rows which refer to the locked rows (any referencing relationship).

A few more important points to keep in mind: select_for_update doesn't allow nullable relations, so you have to explicitly exclude these nullable conditions. In auto-commit mode, select_for_update fails with a TransactionManagementError; you have to run the code inside a transaction explicitly. I have struggled with both of these points :).
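Putting those last two points together, the safe pattern looks roughly like this (a sketch against the same Dealership model used above; it needs a configured Django project to actually run):

```python
from django.db import transaction

# select_for_update must run inside an explicit transaction,
# otherwise Django raises TransactionManagementError in auto-commit mode.
with transaction.atomic():
    dealership = (
        Dealership.objects
        .select_related("oem")
        .select_for_update(of=("self",))  # lock only the Dealership row
        .get(pk="iamid")
    )
    # ... mutate the row while the lock is held ...
    dealership.save()
# the lock is released here, when the transaction commits
```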

That's all about select_for_update: what you need to know to use it in your code when making changes to your database.


#Python #Django #ORM #Database


from sandeepk

Today, we are going to see how we can use the | operator in our Python code to achieve cleaner code.

Here is some code where we have used map and filter for a specific operation:

In [1]: arr = [11, 12, 14, 15, 18]
In [2]: list(map(lambda x: x * 2, filter(lambda x: x % 2 == 1, arr)))
Out[2]: [22, 30]

The same code with Pipes.

In [1]: from pipe import select, where
In [2]: arr = [11, 12, 14, 15, 18]
In [3]: list(arr | where(lambda x: x % 2 == 1) | select(lambda x: x * 2))
Out[3]: [22, 30]

Pipe passes the result of one function to another function, and has inbuilt pipe methods like select, where, tee, and traverse.
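The | trick itself is just operator overloading: when the left operand is a plain list or generator, Python falls back to the right operand's __ror__ method. A minimal sketch of how such a pipe could be built (this is not the actual pipe library's implementation):

```python
class MiniPipe:
    """A tiny pipe: `iterable | MiniPipe(func)` calls func(iterable)."""
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        # Pre-bind extra arguments, e.g. the lambda given to where/select
        return MiniPipe(lambda iterable: self.func(iterable, *args, **kwargs))

    def __ror__(self, iterable):
        # Invoked for `iterable | self`
        return self.func(iterable)

where = MiniPipe(lambda it, pred: (x for x in it if pred(x)))
select = MiniPipe(lambda it, f: (f(x) for x in it))

arr = [11, 12, 14, 15, 18]
print(list(arr | where(lambda x: x % 2 == 1) | select(lambda x: x * 2)))  # [22, 30]
```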

Install Pipe

$ pip install pipe


traverse

Recursively unfolds nested iterables:

In [12]: arr = [[1,2,3], [3,4,[56]]]
In [13]: list(arr | traverse)
Out[13]: [1, 2, 3, 3, 4, 56]


select

An alias for map():

In [1]: arr = [11, 12, 14, 15, 18]
In [2]: list(arr | select(lambda x: x * 2))
Out[2]: [22, 24, 28, 30, 36]


where

Only yields the matching items of the given iterable:

In [1]: arr = [11, 12, 14, 15, 18]
In [2]: list(arr | where(lambda x: x % 2 == 0))
Out[2]: [12, 14, 18]


sort

Like Python's built-in “sorted” primitive. Allows cmp (Python 2.x only), key, and reverse arguments. By default, sorts using the identity function as the key.

In [1]:  ''.join("python" | sort)
Out[1]:  'hnopty'


reverse

Like Python's built-in “reversed” primitive.

In [1]:  list([1, 2, 3] | reverse)
Out[1]:   [3, 2, 1]


strip

Like Python's strip method for str.

In [1]:  '  abc   ' | strip
Out[1]:  'abc'

That's all for today. In this blog you have seen how to install Pipe and use it to write clean, short code with the inbuilt pipes; you can check out more here.


#100DaysToOffload #Python #DGPLUG


from sandeepk

My work required me to profile one of our Django applications, to help identify the points in the application we can improve to reach our North Star. So I thought it would be great to share my learnings and the tools I used to get the job done.

What is Profiling?

Profiling is a measure of the time or memory consumption of a running program. This data can further be used for program optimization.
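Before reaching for any of the tools below, the standard library's cProfile is enough for a quick first measurement (a minimal sketch; slow_sum is just a stand-in workload):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """A deliberately naive workload to profile."""
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Print the five most expensive calls by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```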

There are many tools/libraries out there which can do the job. I have found these helpful.

Apache JMeter

It is open-source software which is great for load and performance testing web applications. It's easy to set up and can be configured for what we want in the result report.


Pyinstrument

Pyinstrument is a Python profiler that offers a Django middleware to record the profiling. The profiler generates profile data for every request. The PYINSTRUMENT_PROFILE_DIR setting names the directory that stores the profile data, which is in HTML format. You can check how it works over here

Django Query Count

Django Query Count is a middleware that prints the number of database queries made during request processing. There are two possible settings for this, which can be found here

Django Silk

Django Silk is a middleware for intercepting requests/responses. We can profile a block of code or a function, either manually or dynamically. It also has a user interface for inspection and visualization.

So, here are some of the tools which can be of great help in profiling your code and putting your effort in the right direction when optimizing applications.


#100DaysToOffload #Python #DGPLUG


from sandeepk

Nowadays, every project has a Docker image to ease the local setup and deployment process. We can attach these running containers to our editor and make changes on the go. We will be talking about the VS Code editor and how to use it to attach to a running container.

First things first: run your container image and open up the VS Code editor; you will need to install the following plugins:

  1. Remote-Container
  2. Remote Development

Now, after installing these plugins, follow the steps below.

  • Press the F1 key to open the Command Palette and choose Remote-Containers: Attach to Running Container...
  • Choose your running container from the list, and voilà.
  • After this step, a new VS Code editor will open, which will be connected to your container.
  • You will need to re-install plugins for this container again from the Marketplace.

After this, you are all set to play around with the code.


#100DaysToOffload #VSCode #Containers #Docker


from Pradhvan

PyLadies Berlin Logo

Source: https://www.meetup.com/PyLadies-Berlin/

PyLadies' Berlin chapter hosted a meetup on 20 November which seemed very interesting to me when I first saw the agenda. It was on writing better talk proposals.

Though I have not attended many online meetups in the last two years, I definitely didn't want to miss this one, since I had a conference talk coming up early next year and this meetup seemed like a really good opportunity to get some early feedback on the talk.

The meetup was sponsored by Ecosia.org and was hosted on GatherTown. The meetup agenda had only one talk and the rest was all an interactive session.

Once everyone settled in, Maria started her talk, 'The Call for Proposals is open: Tips for writing a conference talk'. The talk resonated with me a lot, because by the end Maria pointed out that most folks don't submit a talk just because they think they lack mastery. Being a similar person who overthinks mastery a lot, I could relate.

Next was the ideation round. It was the most fun and interactive part of the meetup, where everyone pitched their talk idea(s), and after that folks would give feedback (if any) on how the talk proposal could be improved.

I had initially joined the meetup to get some feedback on the talk that had already been selected for a conference. But later decided that I can utilize this chance to get some feedback on a completely new talk proposal.

So I presented a completely different talk idea, 'Up and Flying with the Django Pony in 10 min', that I cooked up during the break.

Django Pony

Pony is Django's unofficial mascot. Source: https://djangopony.com/

I noticed Maria mention Python Pizza in her talk and saw they were accepting 10-minute short talks. So that gave me a chance to experiment and present something similar to Go in 100 Seconds.

Two other reasons that made me choose a talk on Django were:

  • Most folks, when starting out in web development, run behind learning the best framework or constantly switch without actually learning one well and knowing its limitations. But as was rightly pointed out to me during the feedback, this could be a different talk altogether.

  • This talk could have been a really good starting point for someone who is thinking of picking up Django or moving to Django as it gives a good overview of its batteries included nature.

By the end of the ideation round, there were only a couple of folks left. So it was decided that it would really help if everyone wrote a documented draft of the proposal they had shared during the ideation round, and we could then brainstorm on how to improve it further.

The brainstorming session helped frame a better proposal and left me with a lot of valuable feedback, some of which I will consider when writing all my talk proposals in the future.

Jessica, one of the PyLadies Berlin organizers, did a wonderful job keeping the meetup interactive, while creating a safe space for everyone to share their views and feedback in a constructive manner.

I think other meetup groups can definitely take pages from PyLadies Berlin's book in hosting a similar meetup where folks help out with writing better talk proposals, as that would directly impact the quality of talk proposals, which may in turn impact the quality of talks a conference gets. Who knows, some of these proposals could end up being really good blog posts or meetup talks, if not conference talks.

It might not be possible for organizers of an annual PyCon, like PyCon India or other PyCons, but a local chapter of a Python user group can usually come up with a similar meetup in their region just after the CFP opens, which can not only help out beginners but also help folks improve their talk proposals.

That's all for now. Until next time. Stay safe folks.


from mrinalraj

Most probably, you have faced or heard about the recent Facebook outage. Did you know Cisco ThousandEyes pinpointed the issue: “Facebook internal DNS unreachability”?

How is Cisco ThousandEyes able to do it? Broadly, it all happens with agents known as Vantage Points.

  1. External Vantage Points: These agents are distributed globally, like public agents.
  2. Internal Vantage Points: Agents monitoring the enterprise contribute to the internal Vantage Points.
  3. End-User Experience: These agents understand the employee and customer experience.

So all these agents together cover almost all possible sectors, and hence are able to accurately identify the issue.

Why Cisco ThousandEyes? To explain the need for Cisco ThousandEyes, we need to understand the earlier problem. In the past, a company relied only on its enterprise Syslog and SNMP protocols to identify and troubleshoot issues. In this way, the reporting of issues was reactive rather than proactively informing the customer; i.e., the customer first reported the outage, and then the service provider identified and troubleshot it. This gap is now closed by Cisco ThousandEyes, which proactively identifies and troubleshoots issues before they become noticeable to the customer.

Isn't it exciting? Yes, it is.

Here are a few articles related to Cisco ThousandEyes that I found useful: 1. Link to the issue reported 2. Link to the complete analysis of the issue

Coming next: how Cisco ThousandEyes is helping revolutionize maintaining high uptime.


from Pradhvan

Last weekend I attended EuroPython's virtual sprints. Though this was my second year of attending the virtual sprints, I was still a bit overwhelmed. This year I would not only be sprinting on other projects but also helping other folks contribute to ScanAPI. As I had recently been added to the core team, I thought it would be a good chance for me to dive deep into the codebase while helping others at the sprints.

Unlike last year, when Discord was the communication channel for the sprints, this year the folks at EuroPython had a Matrix server. Each sprint project had its own channel, integrated with Jitsi, to help pair program or just have a hallway-like experience.

Similar to last year, the sprints were planned for the whole weekend, with multiple opening/closing sessions to support different timezones. I liked that they kept this from last year, because the other folks from ScanAPI are from Brazil, and for them the first opening session, at 9 AM CEST or 12:30 PM IST, would be something like 4:30 in the morning. The gap between the multiple openings also gave me some time to just visit other projects and contribute; at least that is what I thought initially, and I moved the sprint timings for ScanAPI to the second day.

On the first day of the sprints, I wasn't expecting to be present at the opening, since we had planned our sprint for the other day. But to my surprise I got an invite to the opening session to present ScanAPI, as I was online anyway. So in the opening I mostly talked about what the project is, how to run it locally, and how to pick up issues.

After the opening, since I did not see much activity in the ScanAPI channel right away, I hopped on to EuroPython's website sprint channel, where Raquel and Francesco (the web lead for the project) were already present. I had a really fun time talking to them.

By the end of the day, ScanAPI had three new issues, four pull requests to be reviewed, two of which got merged, and one contribution to the wiki on how to set up the project on Windows without WSL.

The second day, for which I had initially planned our sprint, was a bit slow in general. Since we had very little activity, I wrapped up early after addressing the previous day's pull requests. As there was only one person active on the channel post-lunch, the conversation at the end drifted to bread baking, which seems like a really fun thing to try in your free time.

In the end, it was a weekend well-spent.


from Pradhvan

Making sense of docstrings with Python's ast and tokenize module and using recursion to rewrite Python's os.walk()

Last week was a busy week at work, with a release. Sadly, personal goals suffered.

I did manage to put some effort into issues that were pending. Let's walk through them one at a time:

  • Lynn Root, maintainer of interrogate, gave a detailed description of the issue. The approach to reaching the solution makes much more sense to me now. Based on the response, I wrote a simple script to check for # nodocqa on docstrings, i.e., to not count coverage for that particular class or function. Now I need to check how to implement that in interrogate.

  • On reading about recursion, I realized that recursive algorithms are widely used to explore and manage the file system. This reminded me of Python's os.walk() method. I did spend some time reading the implementation; though the code is well documented, I am still not 100% sure how it works. I guess I will learn more when I finish the script that mimics os.walk() in some manner.
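As a starting point for that script, a stripped-down, recursive version of os.walk() can be sketched like this (top-down only, with none of the symlink or error handling the real implementation has):

```python
import os

def walk(top):
    """Yield (dirpath, dirnames, filenames), recursing into subdirectories."""
    dirs, files = [], []
    for entry in os.scandir(top):
        (dirs if entry.is_dir() else files).append(entry.name)
    yield top, dirs, files
    for name in dirs:
        # Recurse into each subdirectory, mirroring os.walk's top-down order
        yield from walk(os.path.join(top, name))
```

Comparing its output against os.walk() on a real directory tree is a good way to check the sketch.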

That's all for the last week. Until next week. Stay safe folks.



from abbisk


A variable is a way to refer to memory locations accessed by a computer program. In languages like C, C++, and Java, a variable is a name given to a memory location. So when we say:

int x;
float y;

This small piece of code asks the compiler to create memory space for the variables, 4 bytes each (the size depends on the compiler and the programming language). A variable is basically a name assigned to a memory location; if the variable is defined with a particular data type, then only data of the declared type can be stored in it. In Python, however, we do not need to declare a variable beforehand like int x; we can straightaway define the variable and start using it, like x = 10.

Now the point is: how does Python know the type of the variable and how to access it? In Python, data values (integers, lists, strings, functions, etc.) are objects, and every object has a unique id that is assigned to it at the time of object creation. Variables are references (pointers) to these objects. What happens when we say:

x = 10

  • Python executes the right-hand side of the code
  • Identifies 10 to be an integer
  • Creates an object of the Integer class
  • Assigns x to refer to the integer object 10

Now if we create a new variable y which is equal to x, what do you think happens internally?


y = x

y now refers to the same object as x.

How do we know that? It can be verified by checking the id of the object that these two variables x and y are pointing to.

id(y) == id(x)
# output
True

We can also print the id of the object

print(f"id of the object pointed by x:{id(x)}")
print(f"id of the object pointed by y:{id(y)}") 

# output
#your id value might be different from mine but the two ids printed must be the same
id of the object pointed by x:140733799282752 
id of the object pointed by y:140733799282752

Now, what if we assign a different value to y?


y = 40

This will create another integer object with the value 40, and y will now refer to 40 instead of 10.

How do we check? Again, we will use id() to see the ids of the objects.

print(x) # outputs 10
print(y) # outputs 40
print(f"id of the object pointed by x:{id(x)}")
print(f"id of the object pointed by y:{id(y)}")  

# output
id of the object pointed by x:140733799282752
id of the object pointed by y:140733799283712

As we can see from above, the two ids are now different, which means a new object with the value 40 was created, and y now refers to that object.

If we wish to assign some other value to the variable x:

x = "python"
print(id(x))

We will get an object id that is different from the id “140733799282752” printed by id(x) for the integer object 10 previously.

So what happened to the integer object 10? It now remains unreferenced and can no longer be accessed. Python has a smart garbage collection system that looks out for objects that are no longer accessible and reclaims the memory so that it can be used for some other purpose.

One last question: what if we create two integer objects with the same value?

p = 100
q = 100

Before I answer this question, let's check their ids:

print(f"id of the object pointed by p:{id(p)}")
print(f"id of the object pointed by q:{id(q)}")
# output
id of the object pointed by p:140733799285632
id of the object pointed by q:140733799285632

So what Python does is optimize memory allocation by creating a single integer object with the value 100 and then pointing both the variables p and q towards it.

Let's check one more example-

m = 350
n = 350

print(f"id of the object pointed by m:{id(m)}")
print(f"id of the object pointed by n:{id(n)}")

# output
id of the object pointed by m:1783572881872
id of the object pointed by n:1783572881936

To our surprise, the ids of the integer object 350 referred to by m and n are different. What is going on here? Python allocates memory for integers in the range [-5, 256] at startup, i.e., integer objects for these values are created and ids are assigned beforehand. This process is also known as integer caching. So whenever an integer in this range is referenced, Python variables point to the cached object. That is why p and q, both referring to the integer object 100, print the same id. For integers outside the range [-5, 256], objects are created as and when they are defined in the program. This is the reason why the ids of the integer object 350 referenced by m and n are different.

As mentioned earlier, variables point to objects stored in memory; hence variables are free to point to an object of any type.

x = 10 # x points to an object of 'int' type
x = ["python", 20, "apple"] # x now points to an object of type 'list'
x = "python" # x now points to a string type object
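The integer-caching behaviour described above is easy to poke at with the is operator, constructing the numbers at runtime so the interpreter cannot fold them into a single constant (this is CPython-specific behaviour):

```python
a = 100
b = int("100")   # built at runtime, but still the cached small-int object
print(a is b)    # True: 100 is inside the cached range [-5, 256]

m = 350
n = int("350")   # built at runtime, outside the cached range
print(m is n)    # False: a fresh object is created
```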

I hope you enjoyed reading it. Feel free to suggest improvements to the article or any other topic that you would like to know more about.


from mrinalraj

According to Wikipedia, “Cloud computing is the on-demand availability of computer system resources, especially data storage (cloud storage) and computing power, without direct active management by the user.”

Benefits it has brought

With the benefits of IaaS (Infrastructure as a Service) and PaaS (Platform as a Service), we can now use services on the go without any headache of maintaining the facility.

The Difference



from mrinalraj

Today I am very excited to share with you my first song video, 'Zara Zara', on the dgplug platform via YouTube.

Do you know what gives more satisfaction in life? Is it academics, or getting an engineering degree? ;D Mostly it is nurturing your skills and finding a platform to present yourself.

For me, publishing my song gives me more satisfaction. In the end, when the time comes to retire, it will not be the CGPAs but the risky path of chasing your passion and the crazy things we did with friends that will keep us smiling :)


from mrinalraj

Yesterday, 1st October, was the foundation day of The Scriptink. I always imagined how easy it seems to maintain a project, compared with any normal college project. The audience always sees only the result, not the path. Coming to the point: being in Scriptink, I came to know how hard it is to keep maintaining consistency. You may be wondering: consistency in what way?

Surely you follow Scriptink through the app, YouTube, and the LinkedIn page. More than two years of consistency in posting monthly short videos is a great achievement in itself. This is what the audience sees, but inside, a team is working day and night to make each 2-minute short video possible.

Not much to say, but Scriptink is all about us, not about me.

Click on this to experience the 2-year journey with us.