[{"content":"Notice: For best experience, you can duplicate my Deepnote notebook here and run the supporting code yourself.\nAt Traction Tools we\u0026rsquo;re highly commmited to make our clients succeed. We run a platform for EOS, which is a system that facilitates entreprenuers to run their business, internal operations, and effective meetings on the cloud.\nHowever, as a SaaS company, it\u0026rsquo;s very common to deal with issues like churn and customer retention. Here we\u0026rsquo;re going to discuss how we analyze churn and what are some of the important factors that makes our customer stay or cancel their subscription.\nIt is very common for companies to try to predict customer churn using the so-called black-box models which are highly complex algorithms that can detect if a client is going to cancel their subscription based on a number of factors.\nThis is not necessarily bad, but there are better ways to predict tenure and calculate the probabilities of a user churning while using interpretable models which helps us understand what is causing our users to cancel their subscription.\nThis short article is aimed at data scientist and business analysts that would like to have a better understanding on how to calculate a churn probability for a client, causes, and the overall churn ratio.\nIntroduction Because we do software for EOS, and we offer our platform to users that would love to have effective meetings. Following our business model we have teams which include an n number of users, and an n number of meetings run every week per team.\nThis allows us to have a sample dataset that includes:\n Weekly Average Meetings: How many meetings the user runs per week. Active User Count: How many users are within a team. Has Churned: Marks the observable event of \u0026ldquo;death\u0026rdquo; (i.e: cancellation.) Cluster Labels: A categorical variable that tell us if the account has high or low activity in the platform. 
Tenure: How many days the account has been active on our platform  In this article we will work only with a sample containing synthetic data and limited features to keep sensitive information private.\nKey Objectives Key objectives from this analysis are:\n Performing a basic and short EDA (Exploratory Data Analysis) to get insights Getting the median lifetime of our customers Validating if the median lifetime varies per account activity  To run this analysis we\u0026rsquo;ll use a Python environment with libraries such as Pandas, Matplotlib, and Lifelines. Without further ado, let\u0026rsquo;s jump right into the exploratory data analysis (EDA).\nExploratory Data Analysis Let\u0026rsquo;s start by importing the libraries that we\u0026rsquo;re going to use, and loading our dataframe to inspect it.\nimport pandas as pd import matplotlib.pyplot as plt # Establish chart style and figure size plt.style.use(\u0026#39;ggplot\u0026#39;) plt.rcParams[\u0026#39;figure.figsize\u0026#39;] = 14, 7 # Load our dataframe and visualize a random sample of the dataframe accounts = pd.read_csv(\u0026#39;clustered_users.csv\u0026#39;) accounts.sample(10)     weekly_avg_meetings active_user_count has_churned cluster_labels tenure     424 3 12 1 low_activity 798   691 5 10 0 low_activity 800   2833 1 5 0 low_activity 117   958 6 70 0 high_activity 673   277 9 63 0 high_activity 1083   1035 1 5 1 low_activity 365   399 3 17 0 low_activity 992   3050 1 4 0 low_activity 42   597 11 44 0 high_activity 860   1174 4 9 0 low_activity 593    The first thing I\u0026rsquo;m noticing in this sample is that there is a low number of high activity accounts. 
Let\u0026rsquo;s verify this assumption by running the .value_counts() method on our dataframe.\n# Let\u0026#39;s pass the normalize argument to give us percentages accounts.cluster_labels.value_counts(normalize=True) # Output low_activity 0.829308 high_activity 0.170692 Name: cluster_labels, dtype: float64 As expected, 83% of our sample is composed of low activity accounts, while the remaining 17% are high activity accounts.\nIt would be a good idea to visualize this in a horizontal bar chart; fortunately, this is easily done with the pandas .plot() method.\naccounts.cluster_labels \\ .value_counts(normalize=True) \\ .plot(kind=\u0026#39;barh\u0026#39;) # Always label your axes plt.title(\u0026#39;Population Activity Distribution\u0026#39;) plt.ylabel(\u0026#39;Cluster Label\u0026#39;) plt.xlabel(\u0026#39;Percentage\u0026#39;); Having a visual representation helps us identify an issue: if we run a survival analysis, we might have to split these two groups to better understand their behaviors and lifetimes on our platform.\nNow let\u0026rsquo;s consider the tenure column, which tells us how long a client stays with us. 
We can run the .describe() method to get some basic statistics about this feature.\nThe first thing I want to do before we run the .describe() method is to transform the column from days to months.\naccounts[\u0026#39;tenure\u0026#39;] = accounts.tenure / 30.4167 accounts.tenure.describe() # Output count 3064.000000 mean 17.252132 std 11.720486 min 0.098630 25% 8.284922 50% 14.564368 75% 23.876686 max 59.342401 Name: tenure, dtype: float64 Now, we have 3,064 observations here, and we can notice that the mean tenure is 17 months, with the middle 50% of accounts falling between 8 and 24 months.\nHowever, this is not the appropriate way of measuring churn, because we cannot say that every client stays with us for 8 to 24 months; not everyone has the same experience. Furthermore, we already saw that we have different groups of accounts, and this information might vary wildly between them.\nNow, let\u0026rsquo;s compare the behavior of accounts that have cancelled against accounts that are still active in this sample and try to get some insights.\nimport numpy as np # group by churn and activity and use # median and mean aggregations accounts.drop(\u0026#39;tenure\u0026#39;, axis=1) \\ .groupby([\u0026#39;has_churned\u0026#39;, \u0026#39;cluster_labels\u0026#39;]) \\ .agg([np.mean, np.median]) # Output weekly_avg_meetings active_user_count mean median mean median has_churned cluster_labels 0 high_activity 10.711111 10.0 39.337374 37.0 low_activity 2.486349 2.0 11.398427 10.0 1 high_activity 10.285714 7.5 39.642857 37.5 low_activity 1.900000 1.0 7.957895 7.0 What we have here is a multi-index describing the mean and median values for our features, broken down by account activity for active and cancelled accounts.\nOk, that was a mouthful, but let\u0026rsquo;s focus on meetings first:\n When it comes to having weekly meetings, active and cancelled accounts with high activity have roughly the same number of meetings a week on average, but a difference of -2.5 when we calculate it using 
the median. For active accounts with low activity, the mean and the median don\u0026rsquo;t differ wildly. Two meetings a week is reasonable; however, we can notice that for cancelled accounts the number of weekly meetings drops to 1 instead of 2.  Now, what can we conclude about the active user count in each team?\n When it comes to high activity teams, the mean and median values for active and cancelled accounts don\u0026rsquo;t differ much. On the other hand, low_activity accounts that are active usually have about 10 users in their team, while cancelled accounts usually have 7 users on their subscription, a different behavior that requires further analysis.  EDA Key Insights With this information we can already conclude that keeping our users busy in the platform is paramount to retaining them, and because Traction Tools is a collaborative space, having more users in a team increases engagement.\nWe can already start developing retention strategies to succeed with our customers. Based on this information we can also build machine learning algorithms to detect churn, anomalies, and clients that will provide more value over time.\nLet\u0026rsquo;s go a bit further and try to estimate probabilities around the insights we have discovered.\nUsing Lifelines for Survival Analysis There\u0026rsquo;s a great library out there for properly doing survival analysis, created by Cameron Davidson-Pilon, called Lifelines. It\u0026rsquo;s one of the best libraries for survival analysis that I\u0026rsquo;ve tried so far. 
Let\u0026rsquo;s use this to analyze the chance of survival at any point of our clients\u0026rsquo; subscriptions.\nGlobal Survivability Rates We\u0026rsquo;ll start by using the Kaplan-Meier fitter to analyze the survivability rates for the whole population.\nfrom lifelines import KaplanMeierFitter # Filter observed events only churn_filter = (accounts.has_churned == 1) cancelled_accounts = accounts[churn_filter] # Feed the model with our dataset for churned accounts kmf = KaplanMeierFitter() kmf.fit(cancelled_accounts.tenure, label=\u0026#39;Churned Customers\u0026#39;) # Plot the survival chance of our population fig, ax = plt.subplots() kmf.plot(ax=ax, at_risk_counts=True) ax.set_title(\u0026#39;Kaplan-Meier Survival Curve — Churned Customers\u0026#39;) ax.set_xlabel(\u0026#39;Customer Tenure (in Months)\u0026#39;) ax.set_ylabel(\u0026#39;Customer Survival Probability (%)\u0026#39;) plt.show(); Now we can see the total survival chance of our population at any point in time. In the example above we can observe that there\u0026rsquo;s initially a 100% chance of survival, which slowly declines as time goes by.\nOf 408 observations, we can see that in the 10th month 232 of them still have an active subscription, but 176 of them have already cancelled.\nNow let\u0026rsquo;s try to get the median survival time and also the survival chance at this point in time.\nmedian_surv_time = kmf.median_survival_time_ surv_chance = kmf.cumulative_density_at_times(median_surv_time).iloc[0] print(f\u0026#39;The median survival time is: {median_surv_time:0.2f} months\u0026#39;) print(f\u0026#39;With a survivability of: {surv_chance:0.2%}\u0026#39;) # Output The median survival time is: 11.74 months With a survivability of: 50.00% It seems that after the 11th month our clients have a 50/50 chance of cancelling their subscription. 
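For intuition, the estimate lifelines computes here is conceptually simple: at each observed event time, multiply the running survival probability by the fraction of at-risk customers who did not churn at that time. Here is a minimal sketch in plain numpy on toy data (not our dataset; the function name is only for illustration):

```python
import numpy as np

def kaplan_meier(durations, events):
    """Kaplan-Meier sketch: S(t) is the running product of
    (1 - churned_at_t / at_risk_just_before_t) over observed times."""
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=bool)
    surv, s = {}, 1.0
    for t in np.unique(durations):
        at_risk = np.sum(durations >= t)            # still subscribed just before t
        churned = np.sum((durations == t) & events)  # churned exactly at t
        s *= 1.0 - churned / at_risk
        surv[t] = s
    return surv

# Toy tenures (months) and churn flags (1 = churned, 0 = still active/censored)
curve = kaplan_meier([2, 3, 3, 5, 7], [1, 1, 0, 1, 0])
# curve[3] -> 0.6: an estimated 60% of customers survive past month 3
```

Customers who are still active (censored) shrink the at-risk pool without counting as churn, which is exactly why this estimator handles incomplete observations gracefully.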
Let\u0026rsquo;s now get a lower bound and an upper bound so that we have a confidence interval instead of only the median value.\nfrom lifelines.utils import median_survival_times median_ci = median_survival_times(kmf.confidence_interval_) lower_bound, upper_bound = median_ci.loc[0.5] print(f\u0026#39;Survival Lower Bound is: {lower_bound:0.2f} months\u0026#39;) print(f\u0026#39;Survival Upper Bound is: {upper_bound:0.2f} months\u0026#39;) # Output Survival Lower Bound is: 10.42 months Survival Upper Bound is: 12.95 months Now we know that we should take care of accounts that are between 10 to 13 months old. Using this information we can trigger actions to take care of these customers in order to improve their lifespan in the platform.\nSegmented Survivability Rates However, there\u0026rsquo;s one thing we have to notice: these are values for the entire population, but we know that we have different types of clients in our sample, and we should separate these two populations and observe their behavior.\nTo achieve this we\u0026rsquo;ll split our population using the cluster_labels column, which separates the accounts by activity.\nfrom lifelines.plotting import add_at_risk_counts low_ = (cancelled_accounts.cluster_labels == \u0026#39;low_activity\u0026#39;) high_ = (cancelled_accounts.cluster_labels == \u0026#39;high_activity\u0026#39;) fig, ax = plt.subplots() low_kmf = KaplanMeierFitter() low_kmf.fit(cancelled_accounts.tenure[low_], cancelled_accounts.has_churned[low_], label=\u0026#39;Low Activity Accounts\u0026#39;) low_kmf.plot(ax=ax) high_kmf = KaplanMeierFitter() high_kmf.fit(cancelled_accounts.tenure[high_], cancelled_accounts.has_churned[high_], label=\u0026#39;High Activity Accounts\u0026#39;) high_kmf.plot(ax=ax) add_at_risk_counts(low_kmf, high_kmf); We can already observe that there\u0026rsquo;s a BIG difference in survivability between accounts with low activity and accounts with high activity. 
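A visual gap between two curves can also be tested formally with a log-rank test (lifelines ships one as lifelines.statistics.logrank_test). Conceptually, it compares the events observed in one group at each event time against the number expected if both groups shared a single survival curve. A minimal numpy sketch on toy data, not our dataset:

```python
import numpy as np

def logrank_statistic(t1, e1, t2, e2):
    """Two-sample log-rank chi-squared statistic.
    t*: durations; e*: 1 if churn was observed, 0 if censored."""
    t1, e1 = np.asarray(t1, float), np.asarray(e1, bool)
    t2, e2 = np.asarray(t2, float), np.asarray(e2, bool)
    observed_minus_expected, variance = 0.0, 0.0
    for t in np.unique(np.concatenate([t1[e1], t2[e2]])):
        n1, n2 = np.sum(t1 >= t), np.sum(t2 >= t)  # at risk per group
        d1 = np.sum((t1 == t) & e1)                # events in group 1
        d = d1 + np.sum((t2 == t) & e2)            # events in both groups
        n = n1 + n2
        observed_minus_expected += d1 - d * n1 / n
        if n > 1:
            variance += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return observed_minus_expected ** 2 / variance

# Two clearly different toy groups: one churns early, one late
stat = logrank_statistic([1, 1, 1], [1, 1, 1], [10, 10, 10], [1, 1, 1])
```

Under the null hypothesis the statistic follows a chi-squared distribution with one degree of freedom, so a value above roughly 3.84 rejects equal survival at the 5% level; identical groups yield a statistic of zero.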
Let\u0026rsquo;s now get the lower and upper bounds for these types of accounts.\nlow_median_ci = median_survival_times(low_kmf.confidence_interval_) lowact_lower_bound, lowact_upper_bound = low_median_ci.loc[0.5] high_median_ci = median_survival_times(high_kmf.confidence_interval_) highact_lower_bound, highact_upper_bound = high_median_ci.loc[0.5] print(\u0026#39;Low Activity Accounts:\u0026#39;) print(f\u0026#39;\\t- Survival Lower Bound is: {lowact_lower_bound:0.2f} months\u0026#39;) print(f\u0026#39;\\t- Survival Upper Bound is: {lowact_upper_bound:0.2f} months\u0026#39;) print(\u0026#39;High Activity Accounts:\u0026#39;) print(f\u0026#39;\\t- Survival Lower Bound is: {highact_lower_bound:0.2f} months\u0026#39;) print(f\u0026#39;\\t- Survival Upper Bound is: {highact_upper_bound:0.2f} months\u0026#39;) # Output Low Activity Accounts: - Survival Lower Bound is: 9.83 months - Survival Upper Bound is: 12.26 months High Activity Accounts: - Survival Lower Bound is: 17.88 months - Survival Upper Bound is: 31.30 months This is great! We now know that high activity accounts have a much better chance of staying with us for a long time than low activity accounts. While low activity accounts can be retained between 9 and 12 months, high activity accounts can stay with us between 17 and 31 months.\nFrom a business development perspective, this is useful information that we can use to help our customers move from a low activity account to a high activity one to create a more engaging space for them.\nUnderstanding the Impact of Covariates To finalize this short study, I\u0026rsquo;d like to understand the impact of variables like Weekly Average Meetings and Active User Count in a team. This will help us to answer questions like:\n Does having more users in a team space improve the chances of survival? Does having more meetings also affect the chances of survival?  
What we\u0026rsquo;re trying to find out with these questions is whether increasing the activity on accounts can change the probability of an account leaving the service early in their subscription.\nTo begin with, we\u0026rsquo;ll use the Cox Proportional Hazards model to understand how these variables affect the survival chance of a customer. We can import the model directly from Lifelines and then fit it with our dataset.\nfrom lifelines import CoxPHFitter import numpy as np no_clusters_accounts = accounts.drop([\u0026#39;cluster_labels\u0026#39;], axis=1) cph = CoxPHFitter() cph.fit(no_clusters_accounts, duration_col=\u0026#39;tenure\u0026#39;, event_col=\u0026#39;has_churned\u0026#39;) # Output \u0026lt;lifelines.CoxPHFitter: fitted with 3064 total observations, 2656 right-censored observations\u0026gt; Once we\u0026rsquo;ve fitted our model we can see how our variables affect the churn probability for different groups. In the example below we can see how the survival probability changes for accounts with 5, 15, 25, 35, and 45 users.\ncph.plot_partial_effects_on_outcome(\u0026#39;active_user_count\u0026#39;, np.arange(5, 50, 10), cmap=\u0026#39;coolwarm_r\u0026#39;); It\u0026rsquo;s clear that users with sizable teams are more likely to stick around. Because Traction Tools is a collaborative space, it makes sense that having more people in organization accounts improves retention.\nNow, I want to see if having a fair amount of meetings within a week improves retention.\ncph.plot_partial_effects_on_outcome(\u0026#39;weekly_avg_meetings\u0026#39;, np.arange(0, 8, 2), cmap=\u0026#39;coolwarm_r\u0026#39;); As expected, running a fair amount of meetings improves retention. 
This is one of the reasons why high activity accounts are more likely to stick around than low activity accounts.\nConclusion Trying to manage customer churn is no easy task; however, we were able to uncover a good amount of insights that allow us to drive strategies and make informed decisions based on data. These insights allow us to understand our users when it comes to churning, build alert systems and campaigns based on AI, and provide training to our customers to make collaboration happen.\nThis is how we use data at Traction Tools to make important decisions, democratize information, and provide value to our customers.\nAlso, a huge thank you to the team at Deepnote for enabling these tools to help us adopt and scale information as a second language throughout our company, can\u0026rsquo;t thank them enough!\n","permalink":"https://codingdose.info/posts/survival-analysis/","summary":"Notice: For best experience, you can duplicate my Deepnote notebook here and run the supporting code yourself.\nAt Traction Tools we\u0026rsquo;re highly committed to making our clients succeed. We run a platform for EOS, a system that helps entrepreneurs run their business, internal operations, and effective meetings in the cloud.\nHowever, as a SaaS company, it\u0026rsquo;s very common to deal with issues like churn and customer retention. Here we\u0026rsquo;re going to discuss how we analyze churn and some of the important factors that make our customers stay or cancel their subscription.","title":"Survival Analysis: Analyzing Churn and Improving Customer Retention as a SaaS Company"},{"content":"Use case Sometimes you just want to capture the first (or last) event of something. Let\u0026rsquo;s say you have a list of clients and want to capture their first purchase. 
This is useful if you want a list of new paying customers.\nDataset We\u0026rsquo;re thinking about customers here, so let\u0026rsquo;s get the Online Retail Dataset from the UCI Machine Learning Repository. We can download this dataset directly using Pandas.\n\u0026gt;\u0026gt;\u0026gt; import pandas as pd \u0026gt;\u0026gt;\u0026gt; customers = pd.read_excel(\u0026#39;https://archive.ics.uci.edu/ml/machine-learning-databases/00352/Online%20Retail.xlsx\u0026#39;)     InvoiceNo StockCode Description Quantity InvoiceDate UnitPrice CustomerID Country     0 536365 85123A WHITE HANGING HEART T-LIGHT HOLDER 6 2010-12-01 08:26:00 2.55 17850 United Kingdom   1 536365 71053 WHITE METAL LANTERN 6 2010-12-01 08:26:00 3.39 17850 United Kingdom   2 536365 84406B CREAM CUPID HEARTS COAT HANGER 8 2010-12-01 08:26:00 2.75 17850 United Kingdom   3 536365 84029G KNITTED UNION FLAG HOT WATER BOTTLE 6 2010-12-01 08:26:00 3.39 17850 United Kingdom   4 536365 84029E RED WOOLLY HOTTIE WHITE HEART. 6 2010-12-01 08:26:00 3.39 17850 United Kingdom   5 536365 22752 SET 7 BABUSHKA NESTING BOXES 2 2010-12-01 08:26:00 7.65 17850 United Kingdom   6 536365 21730 GLASS STAR FROSTED T-LIGHT HOLDER 6 2010-12-01 08:26:00 4.25 17850 United Kingdom   7 536366 22633 HAND WARMER UNION JACK 6 2010-12-01 08:28:00 1.85 17850 United Kingdom   8 536366 22632 HAND WARMER RED POLKA DOT 6 2010-12-01 08:28:00 1.85 17850 United Kingdom   9 536367 84879 ASSORTED COLOUR BIRD ORNAMENT 32 2010-12-01 08:34:00 1.69 13047 United Kingdom    Methodology If you have a good eye you\u0026rsquo;ll notice that it\u0026rsquo;s one invoice for multiple products that were purchased by a customer, this is why the values in InvoiceNo are duplicated. 
There are a couple of things we can accomplish here:\n Get the first row for every CustomerID Get the first invoice for every CustomerID Get the total sum for a customer\u0026rsquo;s first time purchase Get the first purchase of the day  Get the first row for every CustomerID Here\u0026rsquo;s a neat trick using the methods groupby and head to get the first row of a group:\ncustomers.groupby(\u0026#39;CustomerID\u0026#39;).head(1)    InvoiceNo StockCode Description Quantity InvoiceDate UnitPrice CustomerID Country     536365 85123A WHITE HANGING HEART T-LIGHT HOLDER 6 2010-12-01 08:26:00 2.55 17850.0 United Kingdom   536367 84879 ASSORTED COLOUR BIRD ORNAMENT 32 2010-12-01 08:34:00 1.69 13047.0 United Kingdom   536370 22728 ALARM CLOCK BAKELIKE PINK 24 2010-12-01 08:45:00 3.75 12583.0 France   536371 22086 PAPER CHAIN KIT 50\u0026rsquo;S CHRISTMAS 80 2010-12-01 09:00:00 2.55 13748.0 United Kingdom   536374 21258 VICTORIAN SEWING BOX LARGE 32 2010-12-01 09:09:00 10.95 15100.0 United Kingdom   536376 22114 HOT WATER BOTTLE TEA AND SYMPATHY 48 2010-12-01 09:32:00 3.45 15291.0 United Kingdom   536378 22386 JUMBO BAG PINK POLKADOT 10 2010-12-01 09:37:00 1.95 14688.0 United Kingdom   536380 22961 JAM MAKING SET PRINTED 24 2010-12-01 09:41:00 1.45 17809.0 United Kingdom   536381 22139 RETROSPOT TEA SET CERAMIC 11 PC 23 2010-12-01 09:41:00 4.25 15311.0 United Kingdom   C536379 D Discount -1 2010-12-01 09:41:00 27.5 14527.0 United Kingdom    You\u0026rsquo;ll notice that the values in CustomerID are now unique and only the first row of each group is presented.\nGet the first invoice for every CustomerID It\u0026rsquo;s great to have the first row; however, you\u0026rsquo;ll notice that in our dataset the values in InvoiceNo are repeated, because many customers buy multiple things in one transaction. So it doesn\u0026rsquo;t make sense in a real scenario to filter only the first row.\nInstead, we want to keep every item from the first invoice only. 
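Before applying this to the full dataset, the idea can be sketched on a tiny hypothetical dataframe: take each customer's first InvoiceNo, then keep every row belonging to those invoices. Note this assumes the rows are already in chronological order, since .first() simply picks the first row per group:

```python
import pandas as pd

# Hypothetical mini-dataset mirroring the Online Retail schema
df = pd.DataFrame({
    'InvoiceNo':  ['A1', 'A1', 'A2', 'B1', 'B2', 'B2'],
    'CustomerID': [1, 1, 1, 2, 2, 2],
    'Quantity':   [2, 1, 5, 3, 4, 1],
})

# First invoice per customer, then keep every line of those invoices
first_invoices = df.groupby('CustomerID').InvoiceNo.first()
first_rows = df[df.InvoiceNo.isin(first_invoices)]
# Keeps both 'A1' lines for customer 1 and the 'B1' line for customer 2
```

If the data were not sorted by date, you would sort by the timestamp column first (as the last section of this post does) so that "first" really means earliest.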
We can do that by making a list of the first transaction by customer and then applying a mask to our dataset:\n# Create a list of the first invoice by client first_invoices = customers.groupby([\u0026#39;CustomerID\u0026#39;]).InvoiceNo.first().to_list() # Filter first invoices by client customers[customers.InvoiceNo.isin(first_invoices)]    InvoiceNo StockCode Description Quantity InvoiceDate UnitPrice CustomerID Country     536365 85123A WHITE HANGING HEART T-LIGHT HOLDER 6 2010-12-01 08:26:00 2.55 17850.0 United Kingdom   536365 71053 WHITE METAL LANTERN 6 2010-12-01 08:26:00 3.39 17850.0 United Kingdom   536365 84406B CREAM CUPID HEARTS COAT HANGER 8 2010-12-01 08:26:00 2.75 17850.0 United Kingdom   536365 84029G KNITTED UNION FLAG HOT WATER BOTTLE 6 2010-12-01 08:26:00 3.39 17850.0 United Kingdom   536365 84029E RED WOOLLY HOTTIE WHITE HEART. 6 2010-12-01 08:26:00 3.39 17850.0 United Kingdom   536365 22752 SET 7 BABUSHKA NESTING BOXES 2 2010-12-01 08:26:00 7.65 17850.0 United Kingdom   536365 21730 GLASS STAR FROSTED T-LIGHT HOLDER 6 2010-12-01 08:26:00 4.25 17850.0 United Kingdom   536367 84879 ASSORTED COLOUR BIRD ORNAMENT 32 2010-12-01 08:34:00 1.69 13047.0 United Kingdom   536367 22745 POPPY\u0026rsquo;S PLAYHOUSE BEDROOM 6 2010-12-01 08:34:00 2.1 13047.0 United Kingdom   536367 22748 POPPY\u0026rsquo;S PLAYHOUSE KITCHEN 6 2010-12-01 08:34:00 2.1 13047.0 United Kingdom   536367 22749 FELTCRAFT PRINCESS CHARLOTTE DOLL 8 2010-12-01 08:34:00 3.75 13047.0 United Kingdom   536367 22310 IVORY KNITTED MUG COSY 6 2010-12-01 08:34:00 1.65 13047.0 United Kingdom   536367 84969 BOX OF 6 ASSORTED COLOUR TEASPOONS 6 2010-12-01 08:34:00 4.25 13047.0 United Kingdom   536367 22623 BOX OF VINTAGE JIGSAW BLOCKS 3 2010-12-01 08:34:00 4.95 13047.0 United Kingdom   536367 22622 BOX OF VINTAGE ALPHABET BLOCKS 2 2010-12-01 08:34:00 9.95 13047.0 United Kingdom   536367 21754 HOME BUILDING BLOCK WORD 3 2010-12-01 08:34:00 5.95 13047.0 United Kingdom   536367 21755 LOVE BUILDING 
BLOCK WORD 3 2010-12-01 08:34:00 5.95 13047.0 United Kingdom   536367 21777 RECIPE BOX WITH METAL HEART 4 2010-12-01 08:34:00 7.95 13047.0 United Kingdom   536367 48187 DOORMAT NEW ENGLAND 4 2010-12-01 08:34:00 7.95 13047.0 United Kingdom   536370 22728 ALARM CLOCK BAKELIKE PINK 24 2010-12-01 08:45:00 3.75 12583.0 France    Get the total sum for a customer\u0026rsquo;s first time purchase What we want to do in this context is to summarize the customer\u0026rsquo;s first purchase by multiplying the item Quantity by the UnitPrice and storing the result in Total.\nBecause we\u0026rsquo;re only interested in the first purchase, we\u0026rsquo;ll build a first_purchase dataframe from the first invoices we just filtered.\n# Filter first invoices by client first_purchase = customers[customers.InvoiceNo.isin(first_invoices)].copy() first_purchase[\u0026#39;Total\u0026#39;] = first_purchase.Quantity * first_purchase.UnitPrice first_purchase[[\u0026#39;CustomerID\u0026#39;, \u0026#39;Description\u0026#39;, \u0026#39;Quantity\u0026#39;, \u0026#39;UnitPrice\u0026#39;, \u0026#39;Total\u0026#39;]]    CustomerID Description Quantity UnitPrice Total     17850.0 WHITE HANGING HEART T-LIGHT HOLDER 6 2.55 15.299999999999999   17850.0 WHITE METAL LANTERN 6 3.39 20.34   17850.0 CREAM CUPID HEARTS COAT HANGER 8 2.75 22.0   17850.0 KNITTED UNION FLAG HOT WATER BOTTLE 6 3.39 20.34   17850.0 RED WOOLLY HOTTIE WHITE HEART. 
6 3.39 20.34   17850.0 SET 7 BABUSHKA NESTING BOXES 2 7.65 15.3   17850.0 GLASS STAR FROSTED T-LIGHT HOLDER 6 4.25 25.5   13047.0 ASSORTED COLOUR BIRD ORNAMENT 32 1.69 54.08   13047.0 POPPY\u0026rsquo;S PLAYHOUSE BEDROOM 6 2.1 12.600000000000001   13047.0 POPPY\u0026rsquo;S PLAYHOUSE KITCHEN 6 2.1 12.600000000000001    Now we can see how much customers spent on their first purchase.\nfirst_purchase.groupby([\u0026#39;CustomerID\u0026#39;]).Total.sum()    CustomerID Total     12346.0 77183.6   12347.0 711.79   12348.0 892.8000000000001   12349.0 1757.55   12350.0 334.40000000000003   12352.0 296.49999999999994   12353.0 89.0   12354.0 1079.4   12355.0 459.4   12356.0 2271.6200000000003    Get the first purchase of the day To achieve this we will do the following:\n Sort values by date: Just in case our dataset is not sorted by time. Extract the date from InvoiceDate: We want to remove the hour from the timestamp. Keep only one row per day: We will remove duplicates from the date column and keep the first one only. Drop the date column: We won\u0026rsquo;t need it anymore, so we\u0026rsquo;ll drop it.  
Here\u0026rsquo;s how it works:\ncustomers.sort_values(\u0026#39;InvoiceDate\u0026#39;, inplace=True) customers[\u0026#39;date\u0026#39;] = customers.InvoiceDate.dt.date customers.drop_duplicates(\u0026#39;date\u0026#39;, keep=\u0026#39;first\u0026#39;, inplace=True) customers.drop(\u0026#39;date\u0026#39;, axis=1, inplace=True) customers    InvoiceNo StockCode Description Quantity InvoiceDate UnitPrice CustomerID Country     536365 85123A WHITE HANGING HEART T-LIGHT HOLDER 6 2010-12-01 08:26:00 2.55 17850.0 United Kingdom   536598 21421 PORCELAIN ROSE LARGE 12 2010-12-02 07:48:00 1.25 13090.0 United Kingdom   536847 22067 CHOC TRUFFLE GOLD TRINKET POT 24 2010-12-03 09:31:00 1.65 17135.0 United Kingdom   537037 22988 SOLDIERS EGG CUP 12 2010-12-05 10:03:00 1.25 17243.0 United Kingdom   537226 22389 PAPERWEIGHT SAVE THE PLANET 6 2010-12-06 08:34:00 2.55 15987.0 United Kingdom   C537444 22580 ADVENT CALENDAR GINGHAM SACK -8 2010-12-07 08:42:00 5.95 14850.0 United Kingdom   537667 22158 3 HEARTS HANGING DECORATION RUSTIC 128 2010-12-08 08:12:00 2.55 17870.0 United Kingdom   537879 22694 WICKER STAR 6 2010-12-09 08:34:00 2.1 14243.0 United Kingdom   538172 84212 ASSORTED FLOWER COLOUR \u0026ldquo;LEIS\u0026rdquo; 24 2010-12-10 09:33:00 0.65 15805.0 United Kingdom   538365 22932 BAKING MOULD TOFFEE CUP CHOCOLATE 8 2010-12-12 10:11:00 2.55 17243.0 United Kingdom    Notice how there\u0026rsquo;s only one row per day.\nConclusion That\u0026rsquo;s it, you now know some quick methods to get the first event of something. Make sure to follow me on Twitter if you haven\u0026rsquo;t already and want to be updated on my future posts: Follow @__franccesco ","permalink":"https://codingdose.info/posts/get-first-row-group-pandas/","summary":"Use case Sometimes you just want to capture the first (or last) event of something. Let\u0026rsquo;s say you have a list of clients and want to capture their first purchase. 
This is useful if you want a list of new paying customers.\nDataset We\u0026rsquo;re thinking about customers here, so let\u0026rsquo;s get the Online Retail Dataset from the UCI Machine Learning Repository. We can download this dataset directly using Pandas.\n\u0026gt;\u0026gt;\u0026gt; import pandas as pd \u0026gt;\u0026gt;\u0026gt; customers = pd.","title":"Multiple Ways to Get the First Row for Each Group in Pandas"},{"content":"The wrong path to data science Let me give you some context first. A few years ago, I was on the road to data science. I wanted to learn everything about this field; the mere idea of building something intelligent that could help someone make predictions amazed me.\nInspired by this idea, I decided I wanted to become a data scientist, and like many others, I jumped from engineering to this new landscape. Not knowing where to start, I began searching through the Internet to see what data science looked like.\nSoon enough, I ended up in a vast sea of blogs awash with hype and expectations; I was reading titles such as:\n Data Science for Beginners: FULL COURSE! Must-read books for Machine Learning and Data Science How to speed up pandas with one line of code! 10 BEST machine learning courses  I was ready to read them all. I thought to myself:\n If I can learn the best algorithm, if I build the best model to predict X thing, if I apply bleeding-edge techniques, I will surely stay ahead of the competition; I will be a good data scientist.\n I was about to learn that skill only gets you so far.\nHard truths and disappointments After months of hard study, I was soon looking for a job in this new and exciting field. I was pretty good at Pandas, and I was able to get my head around scikit-learn. Tensorflow? No problem.\nI got a job as a data analyst. I was hyped, I felt fantastic, and I wanted to show what I could do. 
I tried to help the company grow and show them how to apply data analysis and machine learning to their operations.\nBut in my small mind, I had no idea how utterly wrong I was, and let me tell you why.\n1. Data doesn\u0026rsquo;t appear magically Photo by @art_maltsev on Unsplash, accessed 02/11/2020   Guess what? Every blog out there will give you the impression that there\u0026rsquo;s a clean dataset ready to be analyzed. I fell into this assumption as well.\nAs a data analyst, I was tasked to analyze our sales, monthly revenue, cancellations, and everything that is, without a doubt, essential for a SaaS company, and to get a dataset, I had to connect to production servers, APIs, buckets, etc.\nAnd you could say: Well, of course, that\u0026rsquo;s expected! The only problem is that programs do not generate datasets for human consumption.\nMost of the time, you will have a SQL table in a production database riddled with columns whose meaning you don\u0026rsquo;t even understand. Or JSON files that don\u0026rsquo;t even have a proper structure. Or incomplete datasets that I needed to join from multiple sources to have a working dataset.\nNow, imagine doing that over and over and over. The first lesson I learned was that data doesn\u0026rsquo;t magically appear: Something has to generate it, and someone has to put it together.\n2. Scalability will be an issue Photo by @maxon on Unsplash, accessed 02/11/2020   I struggled to get data, but I finally knew my way around it. I was already building and putting together datasets, and it was about time to fire up Jupyter and start tinkering with it.\nMy objective was straightforward: I wanted to know the reasons people cancel their subscription with us. I started my EDA right away. Found some hard truths, cleared up assumptions, and built a small model to predict someone\u0026rsquo;s probability of churning.\nIt wasn\u0026rsquo;t the best model, but it was good enough, and I was proud of it. 
I presented my findings to the stakeholders, and they were delighted. They now expected a monthly report in their email of the accounts most likely to cancel. The experiment was a success!\nDear reader, did you realize what I just said? If you haven\u0026rsquo;t managed a data science team before, you will think there\u0026rsquo;s no issue here. However, you will soon realize that this strategy is not scalable for someone who has to deal with such a team\u0026rsquo;s coordination, capacity, and planning.\nYou\u0026rsquo;re having a data analyst (or a data scientist) extracting data by themselves, running a report locally, on a Jupyter notebook that only works on their computer, manually delivering Excel reports to the stakeholders. If you don\u0026rsquo;t think this is a recipe for disaster, I invite you to reconsider your strategy to build scalable teams.\nMoreover, with this approach, you\u0026rsquo;re going to burn out, stakeholders will depend on your ability to send them the report on time, and you\u0026rsquo;re teaching the company that they don\u0026rsquo;t need to learn about data. They have you.\nI soon learned my second lesson the hard way: Data Science is nothing without the architecture to support it.\n3. Models with no business case are useless Photo by @kellysikkema on Unsplash, accessed 02/11/2020   Even though our processes were not scalable, we kept going, and we were developing model after model, even the same models with different algorithms.\nWe just discovered AI, and we wanted to make it ours! After all those models I built, I realized that people asked me for things that seemed to tackle no business problem. Predict revenue? Ok. Cancellations? Here you go. Forecast new accounts? No problem.\nNow, let me ask some difficult questions of my past self:\n Why did someone want me to predict revenue? Was there a plan to execute if our prediction was that we were about to hit a bad month? We have a model to predict churn. 
Do we have an action plan for users at risk? Why do you want to forecast new accounts? Is there any reason to do it at all?  If you lay out a myriad of models without any business objective, with no execution plan, only to please stakeholders' wonder and amazement, then let me politely tell you that you\u0026rsquo;re providing nothing of value.\nHaving a business objective and an execution plan is paramount to building successful AI products that will change the way you do business. Let\u0026rsquo;s go even further: your responsibility as a data scientist is to educate your stakeholders!\nThey trust your expertise and field knowledge to guide them through this AI revolution. Have them prepare a business case; ask them difficult questions; demand execution plans ready to mitigate failure and explicit assumptions around the business risks involved.\nYou\u0026rsquo;ll provide a clearer agenda and better models, and you\u0026rsquo;ll bring more value to the company. Yet again, I learned another lesson: To build without a plan is to build nothing at all.\n4. Data Science is not exclusive to a team Photo by @lukechesser on Unsplash, accessed 02/11/2020   We are at the Fourth Industrial Revolution, and data is at the front line. That is something that I bet most people don\u0026rsquo;t understand. Data has been such a breakthrough that it has transformed businesses in their entirety!\nRemember how important it was to know how to use a computer? I still remember when having a Microsoft Office learning certificate guaranteed you a job somewhere. People were replaced one by one by newer generations who were more adept with computers.\nYears later, that is no longer a qualification requirement. It is an expectation. Companies now expect you to know how to use a computer, they expect you to understand how to use Excel, they expect you to browse the web without any issue.\nIf you think that data will be different, then I beg you to reconsider your priorities. 
We see companies investing in data democratization like there\u0026rsquo;s no tomorrow. Teaching their employees how to handle data, interpret it, and use it to enhance their operations.\nThey have seen the value that data brings to the business. They know that making informed decisions and developing strategies around data is becoming the norm.\nIf you think Data Science will be reserved for teams who know how to handle data, then I\u0026rsquo;m afraid you\u0026rsquo;re wrong. If you want to succeed in your business, whether you are a data analyst or the Chief Data Officer of an organization, you have to push for data democratization.\nAnd with this, I learned a valuable lesson: If you don\u0026rsquo;t invest in data democratization, your strategies will be superseded by a company that does.\nConclusion and recommendations If you see yourself in one of these points, then let me give you some recommendations:\n Hire a data engineer first: This person will lay the foundations for analysts and data scientists to scale their operations with ease. It\u0026rsquo;s one of the single best decisions you can make. Lay out your data strategy: Think about your data strategy and try to find loopholes in it. Think about how you are going to move data (Apache Airflow?), how you are going to analyze data (Deepnote?), and how you are going to deploy products (MLFlow?) Build with a business case: Make a quick checklist with objectives, risks involved, mitigation plans, what-ifs, and so on. Prepare to scale data: Invest in your people\u0026rsquo;s education. Teach them Python or R; SQL is a must these days. These are the people who are going to be at the front of your business. Maybe they are in marketing, sales, or other operations, but they need to read and manipulate data.  I hope you liked the entry. Follow me on Twitter if you want to read more entries like this 👉🏻 Follow @__franccesco Also, share this article if you found it interesting. 
See you soon.\n","permalink":"https://codingdose.info/posts/data-science-reality/","summary":"The wrong path to data science Let me give you some context first. A few years ago, I was on the road to data science. I wanted to learn everything about this field; the sole idea of building something intelligent that could help someone predict something amazed me.\nInspired by this idea, I decided I wanted to become a data scientist, and like many others, I jumped from engineering to this new landscape.","title":"The 4 Hard Truths Data Science Blogs Don't Teach You About"},{"content":"Logs play a very important role throughout the entire life cycle of application development, as well as in troubleshooting and replicating bugs on production that could lead to service interruption and harm our users\u0026rsquo; experience.\nA few months ago, I went on a journey to find a tool that would allow me to improve log visibility and take action as quickly as possible, and of course with a minimum amount of effort and server requirements. I found many of them, the vast majority very appealing, with endless features to the point where it started feeling somewhat overwhelming. None of these tools, however, was easy to set up, and they all required a learning curve to take advantage of their full potential. Not to mention, the majority weren\u0026rsquo;t free, and pricing would range depending on the retention period, number of instances, license, etc.\nMy goal was to simply run a command, get what I needed, and continue with my life. All of a sudden, I was hit with the aha moment! Why don\u0026rsquo;t I write a simple tool that attempts to solve the problem? 
So here I am, a few months later, sharing my approach to tackling this problem.\n@sherlog/cli requires node \u0026gt;= 12.\nFor this example, I\u0026rsquo;m using nvm to install the minimum required version.\nRun the following commands in your terminal:\n$ nvm install v12.16.1 $ npm install -g @sherlog/cli Initialize the project\n$ sherlog init The previous command generates a .sherlog config file in your current working directory (no biggie, just another JSON). Fill in the blanks. Once configured, it should look similar to this.\n{ \u0026#34;hostname\u0026#34;: \u0026#34;192.168.10.108\u0026#34;, \u0026#34;backpressure\u0026#34;: 1000, \u0026#34;chunks\u0026#34;: 500, \u0026#34;compression\u0026#34;: true, \u0026#34;files\u0026#34;: [{ \u0026#34;metric\u0026#34;: \u0026#34;nginx\u0026#34;, \u0026#34;file\u0026#34;: \u0026#34;/var/log/nginx/access.log\u0026#34;, \u0026#34;eventType\u0026#34;: \u0026#34;http\u0026#34;, \u0026#34;timezone\u0026#34;: \u0026#34;UTC\u0026#34;, \u0026#34;fromBeginning\u0026#34;: true }] } This file can be committed to your repository to speed up the process the next time you need to check your logs in different environments. Let\u0026rsquo;s go ahead and start the service.\n$ sherlog start This will output the following in your terminal:\nSherlog listening on: - Dashboard: http://localhost:8000 - Local: ws://localhost:8000 - Network: ws://192.168.10.108:8000 Navigate to the dashboard URL http://localhost:8000\nNow you can navigate through your logs as if you were watching a YouTube video, going back and forth in case you missed something.\n@sherlog/cli supports the following default log formats out of the box:\n Apache2  HTTP Error   Monolog (e.g. 
Laravel) Mysql  General   Nginx  HTTP Error   PHP-fpm Redis  That\u0026rsquo;s it for now, folks. If you wish to get updates about @sherlog/cli and possible use cases, you can follow me on Twitter @Brucelampson or feel free to submit a pull request to the project on GitHub sherl0g\n","permalink":"https://codingdose.info/posts/monitoring-nginx-with-sherlog/","summary":"Logs play a very important role throughout the entire life cycle of application development, as well as in troubleshooting and replicating bugs on production that could lead to service interruption and harm our users\u0026rsquo; experience.\nA few months ago, I went on a journey to find a tool that would allow me to improve log visibility and take action as quickly as possible, and of course with a minimum amount of effort and server requirements.","title":"Monitoring Nginx with @sherlog/cli"},{"content":"If you find yourself having trouble debugging your code, or wondering what went wrong, then you should start logging events in your Python code.\nUsing the logging library you can record what actions your code is performing, e.g., making a web request, reading a file, monitoring something, etc. It can help you narrow down your faulty code for debugging.\nMoreover, logging is not only helpful for debugging, but it is also helpful for collaboration, and many platforms hook into the logging module in your code so you can navigate between events easily. It will not only help you on your projects, but also in professional environments.\nThe logging module You can start by importing the logging library in your Python shell; after importing it, you can log different event levels such as INFO, WARNING and ERROR.\nIn [1]: import logging In [2]: logging.info(\u0026#39;Hello\u0026#39;) In [3]: logging.warning(\u0026#39;Something might happen...\u0026#39;) WARNING:root:Something might happen... In [4]: logging.error(\u0026#39;Something BAD happened!\u0026#39;) ERROR:root:Something BAD happened! 
I assume you noticed that our info event wasn\u0026rsquo;t logged. This is because the default logging level is set to record only events equal to WARNING and above. There are five levels you should know about (FATAL is simply an alias for CRITICAL).\n   Level Attribute Code     Debug logging.DEBUG 10   Info logging.INFO 20   Warning logging.WARNING 30   Error logging.ERROR 40   Critical logging.CRITICAL 50   Fatal logging.FATAL 50    We can allow the logging module to log info records using the setLevel() function.\nIn [9]: logging.getLogger().setLevel(logging.INFO) In [10]: logging.info(\u0026#39;info!\u0026#39;) INFO:root:info! However, what we should be doing is setting up our own logger instance. This will allow us to set up the format, a handler, and the logging level without having to call it from the main module.\n Loggers have the following attributes and methods. Note that Loggers should NEVER be instantiated directly, but always through the module-level function logging.getLogger(name). Multiple calls to getLogger() with the same name will always return a reference to the same Logger object.\n— Python 3 Docs\n Loading a logger We can get a logger using the method getLogger(); we just have to provide a name for that logger.\nUsually you will want to provide a name to the getLogger() function; this name can be the special variable __name__. This way you can identify which module the log is coming from.\nIn [1]: import logging In [2]: logger = logging.getLogger(__name__) After you have obtained your logger, you have to define a handler. Now, a handler is what tells the logger where it should store the logs. 
The two most common options are the FileHandler and the StreamHandler, but there are other options as well.\nIn [1]: import logging In [2]: logger = logging.getLogger(__name__) In [3]: logger.setLevel(logging.INFO) In [4]: stream_handler = logging.StreamHandler() In [5]: logger.addHandler(stream_handler) In [6]: logger.info(\u0026#39;I\\\u0026#39;m displaying info!\u0026#39;) I\u0026#39;m displaying info! Changing logger format However, I would like to add more formatting to the logs; in this case, a time, a level name, and a message will suffice. Luckily, this is fairly easy using the Formatter class.\nIn [7]: log_format = logging.Formatter(\u0026#34;%(asctime)s - %(levelname)s: %(message)s\u0026#34;) In [8]: stream_handler.setFormatter(log_format) In [9]: logger.info(\u0026#39;I\\\u0026#39;m displaying info!\u0026#39;) 2020-05-25 20:18:25,322 - INFO: I\u0026#39;m displaying info! You can see a list of attribute names in the Python documentation here.\nPutting it all together Let\u0026rsquo;s gather our thoughts and consider this scenario. Let\u0026rsquo;s say we have a TXT file with some URLs, and we have built a script that goes through each one of them to test if they are up and running, throwing error codes (HTTP 5xx), or maybe completely down.\nWe also want to log everything into a file, more specifically INFO levels and up, but we only want to log WARNING events and up in our terminal so we don\u0026rsquo;t fill it up with information we don\u0026rsquo;t care about.\nTo do this we are going to split our script into 2 modules. It\u0026rsquo;s not necessary, but I don\u0026rsquo;t want to cram everything into a single file. Let\u0026rsquo;s first create a TXT file and fill it with URLs:\nhttps://httpstat.us/ https://httpstat.us/500 https://httpstat.us/400 https://localhost/ Now, let\u0026rsquo;s create a Python script and call it load_logger.py. 
Here we\u0026rsquo;ll define our logger instance, the formatter, and the handlers appended to it with their respective logging levels.\nimport logging def load_logger() -\u0026gt; logging.Logger: \u0026#34;\u0026#34;\u0026#34;Return a logger instance.\u0026#34;\u0026#34;\u0026#34; logger = logging.getLogger(__name__) logger.setLevel(logging.INFO) file_handler = logging.FileHandler(\u0026#34;status.log\u0026#34;) stream_handler = logging.StreamHandler() file_handler.setLevel(logging.INFO) stream_handler.setLevel(logging.WARNING) log_format = logging.Formatter(\u0026#34;%(asctime)s - %(levelname)s: %(message)s\u0026#34;) file_handler.setFormatter(log_format) stream_handler.setFormatter(log_format) logger.addHandler(file_handler) logger.addHandler(stream_handler) return logger Now that we have defined our logger, let\u0026rsquo;s create our main script site_monitor.py. What this file will do is:\n Load the event logger Read our file domains.txt Test the status code of each destination Log the event of the status code  import logging import requests from sys import argv from sys import exit from load_logger import load_logger def main(domains_file: str) -\u0026gt; None: \u0026#34;\u0026#34;\u0026#34;Check for a number of HTTP codes and log them.\u0026#34;\u0026#34;\u0026#34; logger = load_logger() with open(domains_file, \u0026#34;r\u0026#34;) as file: logger.info(f\u0026#34;Reading file: {domains_file}\u0026#34;) domains = file.read().splitlines() for url in domains: logger.info(f\u0026#34;Sending get request to: {url}\u0026#34;) try: req = requests.get(url) except Exception as error: logger.error(f\u0026#34;Destination unreachable for url: {url} ({error})\u0026#34;) exit(1) else: if req.ok: logger.info(f\u0026#34;Status OK for url: {url}\u0026#34;) else: logger.warning(f\u0026#34;Received status code: {req.status_code}\u0026#34;) if __name__ == \u0026#34;__main__\u0026#34;: main(argv[1]) To run our script you can call it using the command line:\n$ python site_monitor.py domains.txt 
After that you will see that only WARNING events and up are recorded in the console, but in our status.log file you will see INFO logs and up.\nIn this example, I\u0026rsquo;m executing the script and using the command tail with the flag -f to see what\u0026rsquo;s being recorded in that file.\n As you can see, in our example we only log WARNING events and up in our terminal, but we are able to dump everything into our status.log file.\nIf you want to take a closer look at the code, I\u0026rsquo;ve set up a sample repository at: https://github.com/franccesco/status_monitor_example\nFeel free to reach out if you have any doubts.\n","permalink":"https://codingdose.info/posts/logging-events-in-python/","summary":"If you find yourself having trouble debugging your code, or wondering what went wrong, then you should start logging events in your Python code.\nUsing the logging library you can record what actions your code is performing, e.g., making a web request, reading a file, monitoring something, etc. It can help you narrow down your faulty code for debugging.\nMoreover, logging is not only helpful for debugging, but it is also helpful for collaboration, and many platforms hook into the logging module in your code so you can navigate between events easily.","title":"Python Logging Basics: Why is it important and how to use it?"},{"content":"Sometimes you just want to save a dictionary, a list, a string, or anything into a file so you can use it later. This can be easily achieved with the module Pickle.\n Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source. 
— Pickle Documentation\n What is Pickle Pickle is a module in Python that can be used to serialize or de-serialize a Python object, meaning that it can save an object into a file; just keep in mind that this is not the same as saving a configuration file, as there are other formats better suited to that task such as JSON, CSV, YAML/TOML, etc.\nHow to save a dictionary with Pickle Saving an object with Pickle is really easy; all you need to do is dump the object into a file, providing the object and a file handle. Here\u0026rsquo;s a quick snippet:\nimport pickle dictionary = {\u0026#39;string_1\u0026#39;: 1, \u0026#39;string_2\u0026#39;: 2.2, \u0026#39;string_3\u0026#39;: True} # Pickling (serializing) a dictionary into a file with open(\u0026#39;saved_object.pickle\u0026#39;, \u0026#39;wb\u0026#39;) as filename: pickle.dump(dictionary, filename) And there you go! You have saved the contents of an object into a file. Just remember to always open the file in binary mode, providing the arguments w (writing) and b (binary).\nLoading a dictionary If you have used JSON before then you will find the syntax very familiar. We can load a previously pickled object using the .load() method, providing an open file:\nimport pickle # Unpickling (de-serializing) a dictionary with open(\u0026#39;saved_object.pickle\u0026#39;, \u0026#39;rb\u0026#39;) as filename: dictionary = pickle.load(filename) print(dictionary) # \u0026gt;\u0026gt;\u0026gt; {\u0026#39;string_1\u0026#39;: 1, \u0026#39;string_2\u0026#39;: 2.2, \u0026#39;string_3\u0026#39;: True} As you can see, our dictionary object was loaded correctly. 
To double-check this, here\u0026rsquo;s a script that compares if the contents of a dictionary and a saved pickle are the same:\nimport pickle dictionary_a = {\u0026#39;string_1\u0026#39;: 1, \u0026#39;string_2\u0026#39;: 2.2, \u0026#39;string_3\u0026#39;: True} # Pickling (serializing) dictionary A into a file with open(\u0026#39;saved_object.pickle\u0026#39;, \u0026#39;wb\u0026#39;) as filename: pickle.dump(dictionary_a, filename) # Unpickling (de-serializing) dictionary A into B with open(\u0026#39;saved_object.pickle\u0026#39;, \u0026#39;rb\u0026#39;) as filename: dictionary_b = pickle.load(filename) # Dictionaries A and B remain the same print(f\u0026#39;Is dictionary_a == dictionary_b?: {dictionary_a == dictionary_b}\u0026#39;) # \u0026gt;\u0026gt;\u0026gt; Is dictionary_a == dictionary_b?: True # Dictionaries have the same content print(f\u0026#39;Dictionary A: {dictionary_a}\u0026#39;) print(f\u0026#39;Dictionary B: {dictionary_b}\u0026#39;) # \u0026gt;\u0026gt;\u0026gt; Dictionary A: {\u0026#39;string_1\u0026#39;: 1, \u0026#39;string_2\u0026#39;: 2.2, \u0026#39;string_3\u0026#39;: True} # \u0026gt;\u0026gt;\u0026gt; Dictionary B: {\u0026#39;string_1\u0026#39;: 1, \u0026#39;string_2\u0026#39;: 2.2, \u0026#39;string_3\u0026#39;: True} What kind of data can be Pickled? According to the official documentation, this is the list of object types that can be pickled/unpickled:\n None, True, and False integers, floating point numbers, complex numbers strings, bytes, bytearrays tuples, lists, sets, and dictionaries containing only picklable objects functions defined at the top level of a module (using def, not lambda) built-in functions defined at the top level of a module classes that are defined at the top level of a module instances of such classes whose __dict__ or the result of calling __getstate__() is picklable (see the section Pickling Class Instances for details).  This is a very interesting approach if you need to save an object into a file. 
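The list above can be checked directly with a short, standard-library-only sketch (the variable names here are illustrative, not from the docs): a tuple of picklable built-ins round-trips cleanly through pickle.dumps/pickle.loads, while a lambda fails to pickle.

```python
import pickle

# Mixed built-in types from the list above all round-trip cleanly.
data = (None, True, 3, 2.5, "text", b"bytes", [1, 2], {"key": "value"})
restored = pickle.loads(pickle.dumps(data))
print(restored == data)  # → True

# A lambda is explicitly excluded (only top-level def functions pickle),
# so trying to dump one raises an error.
try:
    pickle.dumps(lambda x: x + 1)
except Exception as error:
    print(f"Cannot pickle a lambda: {error}")
```

Using dumps/loads instead of dump/load keeps everything in memory, which is handy for quick experiments like this one.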
Just beware that there are plenty of formats out there that can help you save a configuration file. But if you need to save an object, then this seems like the right choice!\nFurther Reading  Pickle — Python object serialization  ","permalink":"https://codingdose.info/posts/save-your-dictionaries-lists-tuples-and-other-objects-with-pickle/","summary":"Sometimes you just want to save a dictionary, a list, a string, or anything into a file so you can use it later. This can be easily achieved with the module Pickle.\n Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source. — Pickle Documentation\n What is Pickle Pickle is a module in Python that can be used to serialize or de-serialize a Python object, meaning that it can save an object into a file; just keep in mind that this is not the same as saving a configuration file, as there are other formats better suited to that task such as JSON, CSV, YAML/TOML, etc.","title":"Save Python Objects with Pickle"},{"content":"I\u0026rsquo;ve been trying to publish my packages to PyPI so people can access my software more easily. But I have to be honest, Python\u0026rsquo;s publishing system is not the best out there and it has to improve quite a lot.\nWandering around, I stumbled upon Poetry, which is a Python packager and dependency manager created by Sébastien Eustace for people who don\u0026rsquo;t want to lose their head managing a Python project.\nLet\u0026rsquo;s say that we want to make a small command line application that checks the status of a web page; let\u0026rsquo;s call it checkstat. 
So how can we create, develop, and publish our software with Poetry?\nInstalling Poetry You can install Poetry effortlessly with a Python script provided by the creator.\n$ curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | python Now we are able to see Poetry\u0026rsquo;s options.\n$ poetry Poetry 0.12.14 Usage: command [options] [arguments] Options: -h, --help Display this help message -q, --quiet Do not output any message -V, --version Display this application version --ansi Force ANSI output --no-ansi Disable ANSI output -n, --no-interaction Do not ask any interactive question -v|vv|vvv, --verbose[=VERBOSE] Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug Available commands: about Short information about Poetry. add Add a new dependency to pyproject.toml. build Builds a package, as a tarball and a wheel by default. check Checks the validity of the pyproject.toml file. config Sets/Gets config options. --- SNIP --- debug debug:info Shows debug information. debug:resolve Debugs dependency resolution. self self:update Updates poetry to the latest version. Creating our package and adding dependencies First we have to create our package with the new sub-command, which will create a file structure we can work in.\n# Create a new package $ poetry new checkstat Created package checkstat in checkstat # Enter the new package directory $ cd checkstat # Prints the directory tree $ tree . ├── checkstat │ └── __init__.py ├── pyproject.toml ├── README.rst └── tests ├── __init__.py └── test_checkstat.py 2 directories, 5 files As we can see, we only need the sub-command new followed by a name argument to create a directory structure for our little project. 
This will also generate a configuration file for our project called pyproject.toml, which is a much better way to define your project configuration than setup.py, especially for people who are new to Python; you can read more about this implementation here.\n[tool.poetry] name = \u0026#34;checkstat\u0026#34; version = \u0026#34;0.1.0\u0026#34; description = \u0026#34;\u0026#34; authors = [\u0026#34;Franccesco Orozco \u0026lt;franccesco@codingdose.info\u0026gt;\u0026#34;] [tool.poetry.dependencies] python = \u0026#34;^3.7\u0026#34; [tool.poetry.dev-dependencies] pytest = \u0026#34;^3.0\u0026#34; As you can see, it describes basic information about the project, dependencies, and development dependencies. We will get back to the TOML file shortly.\nWhat we want to do next is create our project\u0026rsquo;s isolated virtual environment and install the development dependencies with the command install.\n# Create package isolated virtualenv and install dependencies $ poetry install Now that we have installed the requirements and initiated the virtual environment, let\u0026rsquo;s add more libraries to our project, shall we? Let\u0026rsquo;s add Click, Colorama and Requests.\n# Add dependencies $ poetry add click colorama requests There you go, now if we check our pyproject.toml again, you can see that it has the new package dependencies.\n[tool.poetry.dependencies] python = \u0026#34;^3.7\u0026#34; click = \u0026#34;^7.0\u0026#34; colorama = \u0026#34;^0.4.1\u0026#34; requests = \u0026#34;^2.22\u0026#34; Developing our module and our CLI We\u0026rsquo;re ready to develop our checkstat module and command line interface. Let\u0026rsquo;s start by making a test with Pytest. For this, let\u0026rsquo;s open test_checkstat.py inside our tests folder.\nHere we\u0026rsquo;re going to set an expectation: our module checkstat should have a method called is_up which should return True if the webpage returns the code 200; for any other HTTP code it should return False. 
Here\u0026rsquo;s the test:\nimport checkstat def test_checkstat(): \u0026#34;\u0026#34;\u0026#34;Test if checkstat module returns True on 200 code.\u0026#34;\u0026#34;\u0026#34; assert checkstat.is_up(\u0026#39;https://codingdose.info\u0026#39;) Now if we run this test with pytest it will show red because we haven\u0026rsquo;t built any modules yet, but for the sake of demonstration, let\u0026rsquo;s show the red test first.\n  Remember: In order to use the packages inside your virtual environment, you have to activate it first. With Poetry, you can either use poetry run _command_ or simply activate it once using poetry shell.\n  # Running our test $ poetry run pytest Now, let\u0026rsquo;s build the module, shall we? Let\u0026rsquo;s make a file called checkstat.py under the checkstat directory and define an is_up method that returns True if the webpage is reachable and returns a 200 HTTP code.\n# checkstat/checkstat.py import requests def is_up(webpage): \u0026#34;\u0026#34;\u0026#34;Return True if 200 code was received, else return False.\u0026#34;\u0026#34;\u0026#34; try: req = requests.get(webpage) # On connection error, return False. except requests.exceptions.ConnectionError: return False # Connection was successful, return True on 200 code, else return False. else: if req.status_code == 200: return True return False Now let\u0026rsquo;s add our module to our initialization package so it gets correctly loaded.\n# checkstat/__init__.py from .checkstat import is_up Let\u0026rsquo;s re-run our test to see if it\u0026rsquo;s passing now.\nPerfect, now that our test is passing we can go ahead and make our command line interface. 
Let\u0026rsquo;s create a file called cli.py inside our checkstat folder.\n# checkstat/cli.py import click import checkstat @click.command() @click.argument(\u0026#39;host\u0026#39;) def main(host): \u0026#34;\u0026#34;\u0026#34;CLI for checkstat package.\u0026#34;\u0026#34;\u0026#34; print(\u0026#39;Status: \u0026#39;, end=\u0026#39;\u0026#39;) if checkstat.is_up(host): click.secho(\u0026#39;Up and running.\u0026#39;, bold=True, fg=\u0026#39;green\u0026#39;) else: click.secho(\u0026#39;Server is down.\u0026#39;, bold=True, fg=\u0026#39;red\u0026#39;) if __name__ == \u0026#39;__main__\u0026#39;: main() Perfect, now we have completed our CLI, but we haven\u0026rsquo;t tried it yet. How do we make it run on the terminal without calling python x_file.py? Easy-peasy: let\u0026rsquo;s edit the pyproject.toml and add a scripts section.\n[tool.poetry] name = \u0026#34;checkstat\u0026#34; version = \u0026#34;0.1.0\u0026#34; description = \u0026#34;\u0026#34; authors = [\u0026#34;Franccesco Orozco \u0026lt;franccesco.orozco@codingdose.info\u0026gt;\u0026#34;] --- SNIP --- # New section [tool.poetry.scripts] checkstat = \u0026#34;checkstat.cli:main\u0026#34; What does checkstat.cli:main mean? It means: Hey Python, whenever I type checkstat I want you to execute the function main() inside the module cli.py in the package checkstat.\nNow that we have updated our configuration file, our CLI gets automatically loaded into our virtual environment and we can execute it directly with Poetry. Let\u0026rsquo;s test it ourselves with a working webpage and another one that returns a 500 HTTP status.\n$ poetry run checkstat https://codingdose.info Status: Up and running. $ poetry run checkstat https://httpstat.us/500 Status: Server is down. Awesome! Our little command line application is ready to be published.\nPublishing our tool Publishing our tool to PyPI is really effortless; let\u0026rsquo;s try to do it right now. 
First of all we have to build our package.\n$ poetry build Building checkstat (0.1.0) - Building sdist - Built checkstat-0.1.0.tar.gz - Building wheel - Built checkstat-0.1.0-py2.py3-none-any.whl Now we have to upload it to the pypi.org repository.\n$ poetry publish Username: franccesco Password: ********** - Uploading checkstat-0.1.0-py2.py3-none-any.whl 100% - Uploading checkstat-0.1.0.tar.gz 100% And that\u0026rsquo;s it! Our tool is now available to anyone using pip to install packages. Let\u0026rsquo;s check it out: https://pypi.org/project/checkstat/\nInstalling our tool Let\u0026rsquo;s go and install our tool to see if it\u0026rsquo;s available now.\n$ pip install checkstat And let\u0026rsquo;s execute it.\nIt works!\nConclusion I hope you liked the entry; today we have learned how to create, develop, and ship a very basic CLI. Poetry is an incredibly easy-to-use and flexible tool to create your packages and distribute them. Of course, there\u0026rsquo;s a lot more than I have shown here, so remember to check out Poetry\u0026rsquo;s documentation.\nFurther reading  Poetry Poetry\u0026rsquo;s Documentation Command Line Applications Structuring Your Project Requests Click Pytest  ","permalink":"https://codingdose.info/posts/develop-and-publish-with-poetry/","summary":"I\u0026rsquo;ve been trying to publish my packages to PyPI so people can access my software more easily. But I have to be honest, Python\u0026rsquo;s publishing system is not the best out there and it has to improve quite a lot.\nWandering around, I stumbled upon Poetry, which is a Python packager and dependency manager created by Sébastien Eustace for people who don\u0026rsquo;t want to lose their head managing a Python project.","title":"Develop and Publish Your Python Packages with Poetry"},{"content":"We\u0026rsquo;ve all been there: your code is performing a job and it\u0026rsquo;s going to take a while. 
I\u0026rsquo;m an impatient guy, so it would be nice to have an ETA or a progress bar to look at. Fortunately, there are libraries out there that can help us achieve this!\nThere are two ways in which we can integrate a progress bar into our loops: via a context manager or by wrapping an iterable object in a function.\nWe\u0026rsquo;re going to be testing Progress, ProgressBar2, TQDM, Click and Clint, so make sure to create your testing environment with Pipenv:\n$ mkdir progressbar-testing $ cd progressbar-testing $ pipenv install progress tqdm progressbar2 click clint Progress While I was testing each library, this is one I really liked because it has a lot of progress bar styles that you can play with. We\u0026rsquo;re not going to look at all of them, but you can take a look at the source code and documentation if you have further questions.\nProgress - BAR This is the most basic one: a progress bar that is filled with hashes. It works pretty easily with a context manager; you can use this snippet as an example:\nfrom time import sleep from progress.bar import Bar with Bar(\u0026#39;Processing...\u0026#39;) as bar: for i in range(100): sleep(0.02) bar.next() As you can see, in this case we only import the Bar class, add a label to our progress bar, and it automatically handles our loop. At the end of each iteration, we call the .next() method on our bar object to update the progress bar.\n Progress - PixelBar We can achieve the same with other progress bar styles; let\u0026rsquo;s try the Pixel Bar:\nfrom time import sleep from progress.bar import PixelBar with PixelBar(\u0026#39;Processing...\u0026#39;) as bar: for i in range(100): sleep(0.02) bar.next()   Progress - PixelSpinner Sometimes you don\u0026rsquo;t know how long it might take to perform an operation. If this is the case, then you can use the Pixel Spinner to display a pixel spinner (duh!) 
without an actual progress bar.\nfrom time import sleep from progress.spinner import PixelSpinner with PixelSpinner(\u0026#39;Processing...\u0026#39;) as bar: for i in range(100): sleep(0.06) bar.next()   Pretty neat, right? You can read more about Progress here:\n Source Code   TQDM TQDM, short for taqaddum, which means progress in Arabic, is a very popular choice, especially for data scientists and analysts, as it provides a really fast framework with a lot of customizations and information for you to work with.\nIt\u0026rsquo;s also simple enough to use with barely two lines of code. Just provide an iterable to the function tqdm() and you\u0026rsquo;re good to go.\nfrom tqdm import tqdm from time import sleep for i in tqdm(range(100)): sleep(0.02) And there you go! You have a lot of information on your progress bar such as a percentage, the length of your iterable, an ETA and even iterations per second!\n You can also add a label to your progress bar, displaying each object along the way:\nimport string from tqdm import tqdm from time import sleep # A list from A to Z wrapped around TQDM function progress_bar = tqdm(list(string.ascii_lowercase)) for letter in progress_bar: progress_bar.set_description(f\u0026#39;Processing {letter}...\u0026#39;) sleep(0.09)   TQDM, from my perspective, is a really important project if you\u0026rsquo;re into Data Science or need an incredibly fast way to show the progress of your operations; you can read more about it here:\n Source Code \u0026amp; Documentation Wiki   Click Click is one of the best libraries out there to create a Command Line Interface for your apps or libraries. I cannot recommend it enough!\nIt also includes a very simple progress bar as a utility; you can use it inside a context manager, just like this:\nimport click from time import sleep # Fill character is # by default, you can change it # for any other char you want, or even change the color. 
fill_char = click.style(\u0026#39;=\u0026#39;, fg=\u0026#39;yellow\u0026#39;) with click.progressbar(range(100), label=\u0026#39;Loading...\u0026#39;, fill_char=fill_char) as bar: for i in bar: sleep(0.02) You can also see that we can change the color of the progress bar meter, and we also have an ETA.\n If you want to know more about the progress bar utility in Click, you can check it out here:\n Click - Showing Progress Bars   ProgressBar2 This is also a very popular choice and one that is easy to use. It also works with widgets to calculate the current progress, such as AbsoluteETA, AdaptiveETA, AdaptiveTransferSpeed and others which are very interesting.\nIts implementation is also very simple. As with TQDM, two lines of code are enough to get us started:\nfrom time import sleep from progressbar import progressbar for i in progressbar(range(100)): sleep(0.02) And yes, although you install it as progressbar2, make sure you import it as progressbar. Here\u0026rsquo;s how it displays the progress bar by default.\n You can check out the ProgressBar2 documentation and homepage here:\n Homepage Documentation Widgets   Clint And lastly we have Clint, which stands for Command Line INterface Tools. It is not maintained anymore, but I will show it here just to pay my respects.\nCreating a progress bar is just as easy as we have seen with the other tools; this one doesn\u0026rsquo;t require a context manager. Here we can see a regular progress bar and a Mill style progress bar:\nfrom time import sleep from clint.textui import progress print(\u0026#39;Clint - Regular Progress Bar\u0026#39;) for i in progress.bar(range(100)): sleep(0.02) print(\u0026#39;Clint - Mill Progress Bar\u0026#39;) for i in progress.mill(range(100)): sleep(0.02) And here you can see it in action:\n  There are other libraries out there, but these are the ones that I definitely recommend you check out. 
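One pattern worth knowing that the examples above don't show: tqdm can also be driven manually with an explicit total, which is handy when progress doesn't come from iterating a single collection (for example, processing chunks of a download of known size). This is a small sketch assuming tqdm is installed; `total`, `desc`, and `update()` are part of its documented API.

```python
from time import sleep
from tqdm import tqdm

# Drive the bar manually: declare the total amount of work up front,
# then advance it by an arbitrary amount on every step.
with tqdm(total=100, desc='Downloading') as bar:
    for chunk in range(10):
        sleep(0.01)     # simulate handling one chunk of work
        bar.update(10)  # advance the bar by 10 units
```

This is the same object the wrapped-iterable form returns, so `set_description()` and the other methods shown earlier work here too.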
Also, here\u0026rsquo;s a snippet of code that tests all of the progress bar styles and libraries that we have tested so far. Just make sure to install the appropriate libraries.\nYou can clone it with Git:\n$ git clone https://gist.github.com/33e56c93c3c43cf70f19ecbfc921e358.git progressbar-testing # To test these progress bars you will have to # install the following packages # pipenv install click progress progressbar2 tqdm clint import string # progress bars import time import click from tqdm import tqdm from progress.bar import Bar from progress.bar import PixelBar from progress.spinner import PixelSpinner from progressbar import progressbar from clint.textui import progress click.secho(\u0026#39;Progress - BAR\u0026#39;, bold=True) with Bar(\u0026#39;Processing...\u0026#39;) as bar: for i in range(100): time.sleep(0.02) bar.next() click.secho(\u0026#39;Progress - PixelBar\u0026#39;, bold=True) with PixelBar(\u0026#39;Processing...\u0026#39;) as bar: for i in range(100): time.sleep(0.02) bar.next() click.secho(\u0026#39;Progress - PixelSpinner\u0026#39;, bold=True) with PixelSpinner(\u0026#39;Processing...\u0026#39;) as bar: for i in range(100): time.sleep(0.02) bar.next() click.secho(\u0026#39;\\nProgressbar2\u0026#39;, bold=True) for i in progressbar(range(100), redirect_stdout=True): time.sleep(0.02) click.secho(\u0026#39;\\nTQDM\u0026#39;, bold=True) for i in tqdm(range(100)): time.sleep(0.02) click.secho(\u0026#39;TQDM - With description\u0026#39;, bold=True) pbar = tqdm(list(string.ascii_lowercase)) for letter in pbar: pbar.set_description(f\u0026#39;Processing {letter}...\u0026#39;) time.sleep(0.09) click.secho(\u0026#39;\\nClick\u0026#39;, bold=True) fill_char = click.style(\u0026#39;=\u0026#39;, fg=\u0026#39;yellow\u0026#39;) with click.progressbar(range(100), label=\u0026#39;Loading...\u0026#39;, fill_char=fill_char) as bar: for i in bar: time.sleep(0.02) click.secho(\u0026#39;\\nClint\u0026#39;, bold=True) for i in progress.bar(range(100)): 
time.sleep(0.02) click.secho(\u0026#39;Clint - Mill\u0026#39;, bold=True) for i in progress.mill(range(100)): time.sleep(0.02) Also, keep in mind that all these libraries have more features than the ones we have seen here, so make sure to check them out. Have fun!\n","permalink":"https://codingdose.info/posts/how-to-use-a-progress-bar-in-python/","summary":"We\u0026rsquo;ve all been there: your code is performing a job and it\u0026rsquo;s going to take a while. I\u0026rsquo;m an impatient guy, so it would be nice to have an ETA or a progress bar to show us. Fortunately, there are libraries out there that can help us achieve this!\nThere are two ways in which we can integrate a progress bar into our loops: via a context manager, or by wrapping an iterable object in a method.","title":"How to Easily Use a Progress Bar in Python"},{"content":"I\u0026rsquo;m going to show you how to create a project page using the ever popular Jekyll and GitHub Pages so your projects can have a face, and of course, to show it off to your friends out there.\nSo, without further ado, we should get to the point already, shall we?\nBuilding our site with Jekyll Let\u0026rsquo;s say that you have a project called chuck-says that acts like a fortune cookie + cowsay whenever you call it in the command line.\nTo provide our repository with a project page, we can use Jekyll, a tool made in Ruby that uses a combination of Markdown and Front Matter to create static websites and blogs. It\u0026rsquo;s a very easy-to-use site generator that can help us accomplish what we want in this case. We can install it using the gem command:\n$ gem install jekyll Now, let\u0026rsquo;s go to the folder in which our project is located. There, we can create our project\u0026rsquo;s page in the docs folder. There\u0026rsquo;s no need to create the folder, as Jekyll can do this for us:\n$ jekyll new docs Running bundle install in /home/your_username/chuck-says/docs... 
Bundler: Fetching gem metadata from https://rubygems.org/........... Bundler: Fetching gem metadata from https://rubygems.org/. Bundler: Resolving dependencies... Bundler: Using public_suffix 3.1.0 --- SNIP --- After that we can see our page structure, which can be something like this:\n$ tree docs/ . ├── 404.html ├── about.md ├── _config.yml ├── Gemfile ├── Gemfile.lock ├── index.md └── _posts └── 2019-06-09-welcome-to-jekyll.markdown 1 directory, 7 files Here\u0026rsquo;s a brief description of each one of them; the bold ones, such as config.yml and index.md, are the ones we\u0026rsquo;re actually going to use!\n   File Description     404.html Not Found template   about.md A short about page   config.yml Configuration file   Gemfile Dependency gem requirements and configuration   Gemfile.lock Pinned gem versions   index.md Front Page (And the one we\u0026rsquo;ll use!)   posts/ Folder containing blog posts (we won\u0026rsquo;t use them!)    As we\u0026rsquo;re not trying to create a blog (we can see how to do that in another post), we can remove about.md and the posts/ folder.\nBefore we move on, there\u0026rsquo;s something important we need to do first! We have to add the github-pages gem, provided by GitHub, in order to have access to themes and to ensure maximum compatibility when our site deploys on GitHub Pages. Let\u0026rsquo;s open up Gemfile and add the gem as follows:\n# Main dependencies and group definitions source \u0026#34;https://rubygems.org\u0026#34; gem \u0026#34;jekyll\u0026#34;, \u0026#34;~\u0026gt; 3.8.5\u0026#34; gem \u0026#34;minima\u0026#34;, \u0026#34;~\u0026gt; 2.0\u0026#34; group :jekyll_plugins do gem \u0026#34;jekyll-feed\u0026#34;, \u0026#34;~\u0026gt; 0.6\u0026#34; end gem \u0026#34;tzinfo-data\u0026#34;, platforms: [:mingw, :mswin, :x64_mingw, :jruby] gem \u0026#34;wdm\u0026#34;, \u0026#34;~\u0026gt; 0.1.0\u0026#34; if Gem.win_platform? 
# Add the GitHub Pages gem gem \u0026#39;github-pages\u0026#39;, group: :jekyll_plugins Now, let\u0026rsquo;s update our packages:\n# Remember to do this inside the /docs folder $ bundle update Fetching gem metadata from https://rubygems.org/.......... Fetching gem metadata from https://rubygems.org/. Resolving dependencies... Using concurrent-ruby 1.1.5 ... Bundle updated! $ bundle install Using concurrent-ruby 1.1.5 Using i18n 0.9.5 ... Bundle complete! 5 Gemfile dependencies, 85 gems now installed. Use `bundle info [gemname]` to see where a bundled gem is installed. To ensure that we can run our page, let\u0026rsquo;s serve it locally with the serve or s subcommand:\n$ bundle exec jekyll serve Configuration file: /home/your_home/workspace/chuck-says/docs/_config.yml Source: /home/your_home/workspace/chuck-says/docs Destination: /home/your_home/workspace/chuck-says/docs/_site Incremental build: disabled. Enable with --incremental Generating... Jekyll Feed: Generating feed for posts done in 0.111 seconds. Auto-regeneration: enabled for \u0026#39;/home/your_home/workspace/chuck-says/docs\u0026#39; Server address: http://127.0.0.1:4000/ Server running... press ctrl-c to stop. There you go! You can now visit http://127.0.0.1:4000/ to see your Jekyll web page up and running!\nNow this is great, but it looks like a blog, and we\u0026rsquo;re not trying to make a blog here; we have to perform a few tweaks to bring our one-page project website to life.\nConfiguring Jekyll We got our site up and running, which is awesome: you can make blog posts! But we\u0026rsquo;re not looking for that. Instead, we want to create a one-page site for our project.\nTo achieve this, we\u0026rsquo;re going to open up _config.yml and make a few changes, as well as change the main theme. We will want to apply the Cayman theme for our site. 
Here\u0026rsquo;s the configuration file:\ntitle: Your awesome title email: your-email@example.com description: \u0026gt;- # this means to ignore newlines until \u0026#34;baseurl:\u0026#34; Write an awesome description for your new site here. You can edit this line in _config.yml. It will appear in your document head meta (for Google search results) and in your feed.xml site description. baseurl: \u0026#34;\u0026#34; # the subpath of your site, e.g. /blog url: \u0026#34;\u0026#34; # the base hostname \u0026amp; protocol for your site, e.g. http://example.com twitter_username: jekyllrb github_username: jekyll # Build settings markdown: kramdown theme: minima plugins: - jekyll-feed We will want to change these variables, most importantly the baseurl. For GitHub projects, your baseurl will be your repository name. If you change your repository name in the future, then remember to edit this line! Let\u0026rsquo;s make our changes:\ntitle: Chuck Says. email: franccesco@codingdose.info description: Replace weak fortune cookies with all-mighty Chuck Norris facts. baseurl: \u0026#34;/chuck-says\u0026#34; # the subpath of your site, e.g. /blog url: \u0026#34;\u0026#34; # If you have a domain name, then you should fill this out! twitter_username: __franccesco github_username: franccesco # Build settings markdown: kramdown theme: jekyll-theme-cayman # \u0026lt;\u0026lt; Set the Cayman theme! plugins: - jekyll-feed That was fast, right? But before we move on, it\u0026rsquo;s time for us to update the layout in our index.md file. You see, Jekyll works using a pre-defined layout that wraps the page; these layouts are defined in the _layouts directory as HTML files. Luckily, they are added automatically by the github-pages gem, so you won\u0026rsquo;t have to create one.\nLet\u0026rsquo;s add the default layout to our index.md page using Front Matter. 
Let\u0026rsquo;s open it up and change the format; after that, we can finally fill out our page!\n--- # You can copy and paste your `README.md` contents, or write something else! # Feel free to use your imagination here, just remember to respect the front matter division (`---`). layout: default # \u0026lt;\u0026lt; Change this line from \u0026#39;home\u0026#39; to \u0026#39;default\u0026#39; --- Chuck Norris Facts, right in your terminal! Because, who wouldn\u0026#39;t want that? You better... Get your day started as soon as you see that sweet Chuck Norris fact of the day in your terminal; there\u0026#39;s more than 600+ facts here, baby! They\u0026#39;re all real... allegedly. --- SNIP --- Deploying our site with GitHub Pages After you have pushed your /docs folder to your repository with all of the changes we previously made, let\u0026rsquo;s configure GitHub so it can deploy our site. Fortunately, this is very easy.\n In your GitHub repository, click the Settings tab. Scroll down until you see GitHub Pages. Click the button under Source and select master branch /docs folder  After that, you should see Your site is published at https://yourusername.github.io/your-project/; it may take a few seconds to appear.\nAfter you have applied the settings, you should be able to check out your new project site at the URL that GitHub provides you.\nConclusion Jekyll is a very powerful site generator. You can use it to make a blog, an online CV, your project\u0026rsquo;s sites, and it doesn\u0026rsquo;t cost a dime to host it at GitHub Pages! Here\u0026rsquo;s an example of how it looks: https://franccesco.github.io/chuck-says/\n\nPretty neat, right? You can check the example repository by clicking on the image. Also, if you\u0026rsquo;re new and you would like to make your first contribution in the GitHub community, check out the issues pages.\n I think I\u0026rsquo;ll make a post on how to create a blog using Jekyll, but we will also check out other static site generators too. 
I hope you really liked the post; if you have any questions, don\u0026rsquo;t hesitate to let me know.\n","permalink":"https://codingdose.info/posts/create-a-project-page-for-your-repositories-easily-with-jekyll/","summary":"I\u0026rsquo;m going to show you how to create a project page using the ever popular Jekyll and GitHub Pages so your projects can have a face, and of course, to show it off to your friends out there.\nSo, without further ado, we should get to the point already, shall we?\nBuilding our site with Jekyll Let\u0026rsquo;s say that you have a project called chuck-says that acts like a fortune cookie + cowsay whenever you call it in the command line.","title":"Create a Project Page for Your Repos Easily With Jekyll and GitHub Pages"},{"content":"I\u0026rsquo;ve been writing and focusing on Python lately and I\u0026rsquo;ve been wanting to make more content about Ruby. Ruby was my very first language and the one that got me into this programming world.\nFor this entry I\u0026rsquo;m going to write about how to create, test and publish our gem to RubyGems.org to make it available for everyone, and in future entries we\u0026rsquo;re going to see how to set up CI/CD for automatic testing and deployment, Behavior Driven Testing with Cucumber/Aruba and Code Coverage with SimpleCov.\nLet\u0026rsquo;s start with the basics. You can skip the basics by clicking here\nWhat\u0026rsquo;s a ruby gem? A ruby gem is a piece of code that you can integrate into your software (made in ruby) to help you achieve some tasks more easily. Think of it as a library, because that\u0026rsquo;s exactly what it is!\nAn example of this would be requiring a gem that can make HTTP requests for us. 
One gem (read: library) that can perform this would be httparty.\nirb(main):001:0\u0026gt; require \u0026#39;httparty\u0026#39; =\u0026gt; true irb(main):002:0\u0026gt; response = HTTParty.get(\u0026#39;https://google.com\u0026#39;) irb(main):003:0\u0026gt; response.header =\u0026gt; #\u0026lt;Net::HTTPOK 200 OK readbody=true\u0026gt; Another good example is the gem Clipboard, which allows us to copy, paste and clear the clipboard on Linux, MacOS and Windows.\nirb(main):001:0\u0026gt; require \u0026#39;clipboard\u0026#39; =\u0026gt; true irb(main):002:0\u0026gt; Clipboard.copy(\u0026#39;Hello world!\u0026#39;) =\u0026gt; \u0026#34;Hello world!\u0026#34; irb(main):003:0\u0026gt; Clipboard.paste =\u0026gt; \u0026#34;Hello world!\u0026#34; irb(main):004:0\u0026gt; Clipboard.clear =\u0026gt; \u0026#34;\u0026#34; How to install a gem Installing a gem is pretty straightforward. We can do this with the gem command line application provided by RubyGems; you shouldn\u0026rsquo;t worry about installing it, as it comes bundled with Ruby since version 1.9:\n$ gem install _gem_name_here_  To install the Clipboard gem, we can do it like this:\n$ gem install clipboard  End users are also going to install our gem (or library) like this.\nDependency issues There\u0026rsquo;s only one issue with this implementation, and that is when you have two pieces of code that require different versions of (let\u0026rsquo;s say) Clipboard. 
For example, SoftwareA requires the Clipboard gem version 0.5.8, and SoftwareB requires version 1.1.2 of the same gem, which brings breaking changes as it is not backwards compatible with previous versions of the gem.\nSoftwareA was installed first, so you have Clipboard version 0.5.8, but as soon as you install SoftwareB using the gem command line, it proceeds to install Clipboard\u0026rsquo;s newest version, which would be 1.1.2.\nAs this new version brings breaking changes and is not backwards compatible with previous versions, due to refactoring, renamed functions/methods/classes, etc., it becomes pretty obvious that SoftwareA won\u0026rsquo;t work.\nYou reinstall the previous version of Clipboard, 0.5.8, and it works now! But guess what, SoftwareB just broke. Welcome to Dependency Hell.\nBundler comes to the rescue To resolve this issue, we need a sort of isolated environment where we can develop or deploy our software without meddling with the dependency versions of our other projects.\nBundler was designed with this idea in mind: you can build your own library or app without affecting the dependency versions in your other projects. If you\u0026rsquo;re familiar with virtualenv, venv, pipenv or poetry in Python, then you\u0026rsquo;ll get the hang of it in no time.\nTo install bundler we follow the same procedure as when installing any other gem.\ngem install bundler  After that, we\u0026rsquo;re able to use bundler with any application to install its requirements. For this, we can go to the project folder and create a Gemfile (or sometimes a gemspec).\nRequire gems in your Gemfile This Gemfile will contain the other libraries that we\u0026rsquo;re going to use to make our gem work. 
For this example we\u0026rsquo;re going to create a folder to hold a new project and then create a Gemfile to hold our dependencies.\n$ mkdir mygem $ cd mygem $ touch Gemfile  Now that we have a Gemfile in our mygem directory, let\u0026rsquo;s fill it with a gem that we\u0026rsquo;re going to require to build our command line interface. This gem is Thor.\n# Gemfile source \u0026#34;https://rubygems.org\u0026#34; gem \u0026#39;thor\u0026#39;, \u0026#39;~\u0026gt; 0.20\u0026#39; What\u0026rsquo;s happening here? Well, the first line tells Bundler that we\u0026rsquo;re going to require our gems from the server rubygems.org. The second line tells Bundler to install the gem Thor.\nBut what\u0026rsquo;s that ~\u0026gt; doing there? It\u0026rsquo;s basically a way of saying \u0026ldquo;I want the highest version of thor in the range \u0026gt;= 0.20 and \u0026lt; 1.0.\u0026rdquo; This translates to the highest version of thor available since 0.20 but less than 1.0.\nThis is called Ruby\u0026rsquo;s Pessimistic Operator, the twiddle-wakka, or the spermy operator if you prefer it that way.\nNow that we have defined our requirements, we can fetch the latest gem versions available to us using the update command in Bundler.\n$ bundle update Fetching gem metadata from https://rubygems.org/. Resolving dependencies... Using bundler 1.16.4 Fetching thor 0.20.0 Installing thor 0.20.0 Bundle updated!  Now that we have fetched the latest versions (within our version constraint defined in the Gemfile, of course) we can proceed to install them using the install command.\n$ bundle install Using bundler 1.16.4 Using thor 0.20.0 Bundle complete! 1 Gemfile dependency, 2 gems now installed. 
Use `bundle info [gemname]` to see where a bundled gem is installed  This will generate a Gemfile.lock file that will pin our gem versions.\nGEM remote: https://rubygems.org/ specs: thor (0.20.0) PLATFORMS ruby DEPENDENCIES thor (~\u0026gt; 0.20) BUNDLED WITH 1.16.4 You can see here that there are specifications for the version numbers of our dependencies, the platforms, the remote server where we\u0026rsquo;re going to retrieve our gems, and the bundler version.\nNow we\u0026rsquo;re able to use the thor gem in our library. To our surprise, this gem also comes with a command line interface (CLI) that we can use.\nTo execute it, we have to do it under the environment that bundler has prepared for us. We can execute it using the exec command in bundler.\n$ bundle exec thor Commands: thor help [COMMAND] # Describe available commands or one specific command thor install NAME # Install an optionally named Thor file into your system commands thor installed # List the installed Thor modules and commands thor list [SEARCH] # List the available thor commands (--substring means .*SEARCH) thor uninstall NAME # Uninstall a named Thor module thor update NAME # Update a Thor file from its original location thor version # Show Thor version  Awesome! Now that we know how to define requirements and version constraints within our application\u0026rsquo;s project, we can go and create a directory structure for our gem. But beware, developing gems is a bit different than developing applications with Ruby.\nCreating our project Project description What we\u0026rsquo;re going to create is a gem with a command line interface called DiceMyPass (or DMP, from now on). This gem will provide you with a secure passphrase extracted from EFF\u0026rsquo;s long wordlist, with an optional length. 
For example:\n$ dmp gen - Passphrase: slashed uncharted evoke placard outweigh revision  Additionally, we\u0026rsquo;re going to add an option to our gen (read: generate) command that will check if our newly generated passphrase was found in a dataset on HIBP, to see if it\u0026rsquo;s vulnerable. For example:\n$ dmp gen --hibp - Passphrase: rockstar brunt stunt remindful astronaut bats - Password was not found in a dataset.  And lastly, we\u0026rsquo;re going to add an option to copy the new passphrase to the clipboard with the flag --clipboard.\nCreating a gem with bundler Now we\u0026rsquo;re going to get our hands dirty. You might think that we will have to create a directory structure and a Gemfile for our Ruby gem DMP; fortunately, bundler has things covered and is able to scaffold one for us.\nAllow bundler to create a scaffold of your project using its gem command. It\u0026rsquo;s going to ask you a couple of questions. They\u0026rsquo;re all important, but when it asks you about testing you should write minitest, which is a gem that will help us test the functionality of our gem.\n$ bundler gem dmp Creating gem 'dmp'... MIT License enabled in config Code of conduct enabled in config create dmp/Gemfile create dmp/lib/dmp.rb create dmp/lib/dmp/version.rb create dmp/dmp.gemspec create dmp/Rakefile create dmp/README.md create dmp/bin/console create dmp/bin/setup create dmp/.gitignore create dmp/.travis.yml create dmp/test/test_helper.rb create dmp/test/dmp_test.rb create dmp/LICENSE.txt create dmp/CODE_OF_CONDUCT.md Initializing git repo in /home/franccesco/workspace/dmp Gem 'dmp' was successfully created.  
This will generate a directory structure, and there are a couple of files that require your attention.\n   File Description     Gemfile Gemfile holding our application dependencies   dmp.gemspec Gemspec holding our gem dependencies   Rakefile Rake commands to handle our build cycle   CODE_OF_CONDUCT.md Code of Conduct to let people know how to contribute   LICENSE.txt License your project under the MIT license   .gitignore List of files excluded from version control (git)   lib/dmp.rb The file where we\u0026rsquo;re going to develop our gem   lib/dmp/version.rb Here we\u0026rsquo;re going to bump the version number of our gem    Obviously, there are others that are also important, but we\u0026rsquo;re going to see them in other posts.\nNow, we have to define the dependencies of our project. But hold on, we\u0026rsquo;re not using the Gemfile to define the dependencies of our gem; we\u0026rsquo;re using the .gemspec here.\nThis is because there\u0026rsquo;s a difference between developing a gem and developing an application. I\u0026rsquo;m not going through the details about the differences; you can find that in this excellent article by Yehuda Katz.\nTo make it easier for you, just remember:\n When developing an app: Use the Gemfile. When developing a gem: Use the gemspec.  
Let\u0026rsquo;s open up our gemspec and fill it with the necessary information, remember to replace the TODO\u0026rsquo;s with relevant information.\nlib = File.expand_path(\u0026#34;../lib\u0026#34;, __FILE__) $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib) require \u0026#34;dmp/version\u0026#34; Gem::Specification.new do |spec| spec.name = \u0026#34;dmp\u0026#34; spec.version = Dmp::VERSION spec.authors = [\u0026#34;Franccesco Orozco\u0026#34;] spec.email = [\u0026#34;franccesco@codingdose.info\u0026#34;] spec.summary = %q{Generate a secure passphrase.} spec.description = %q{Generates a passphrase using EFF\u0026#39;s long wordlist.} spec.homepage = \u0026#34;https://github.com/franccesco/dmp\u0026#34; spec.license = \u0026#34;MIT\u0026#34; # Specify which files should be added to the gem when it is released. # The `git ls-files -z` loads the files in the RubyGem that have been added into git. spec.files = Dir.chdir(File.expand_path(\u0026#39;..\u0026#39;, __FILE__)) do `git ls-files -z`.split(\u0026#34;\\x0\u0026#34;).reject { |f| f.match(%r{^(test|spec|features)/}) } end spec.bindir = \u0026#34;exe\u0026#34; spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) } spec.require_paths = [\u0026#34;lib\u0026#34;] # add dependencies spec.add_dependency \u0026#39;thor\u0026#39;, \u0026#39;~\u0026gt; 0\u0026#39; spec.add_dependency \u0026#39;colorize\u0026#39;, \u0026#39;~\u0026gt; 0.8\u0026#39; spec.add_dependency \u0026#39;clipboard\u0026#39;, \u0026#39;~\u0026gt; 1.1\u0026#39; # add dependencies specially for development needs spec.add_development_dependency \u0026#34;bundler\u0026#34;, \u0026#34;~\u0026gt; 1.16\u0026#34; spec.add_development_dependency \u0026#34;rake\u0026#34;, \u0026#34;~\u0026gt; 10.0\u0026#34; spec.add_development_dependency \u0026#34;irb\u0026#34;, \u0026#34;~\u0026gt; 1.0.0\u0026#34; spec.add_development_dependency \u0026#34;minitest\u0026#34;, \u0026#34;~\u0026gt; 5.0\u0026#34; spec.add_development_dependency 
\u0026#34;minitest-reporters\u0026#34;, \u0026#34;~\u0026gt; 1.3\u0026#34; end As you can see, we have defined our dependencies: Thor to handle our command line interface, Colorize to print colored strings to the command line, and Clipboard to copy the output to our clipboard automatically.\nFor our development dependencies we have added minitest-reporters, which displays a nice report of our tests right in the command line.\nAs you can see, they all have version constraints to keep our gem from breaking when its dependencies release major versions with incompatible changes. Let\u0026rsquo;s install our dependencies.\n$ bundle update Fetching gem metadata from https://rubygems.org/........ Resolving dependencies... Using rake 10.5.0 Using ansi 1.5.0 Using builder 3.2.3 Using bundler 1.16.4 Using clipboard 1.1.2 Using colorize 0.8.1 Using thor 0.20.0 Using dmp 0.1.0 from source at `.` Using minitest 5.11.3 Using ruby-progressbar 1.10.0 Using minitest-reporters 1.3.4 Bundle updated! $ bundle install Using rake 10.5.0 Using ansi 1.5.0 Using builder 3.2.3 Using bundler 1.16.4 Using clipboard 1.1.2 Using colorize 0.8.1 Using thor 0.20.0 Using dmp 0.1.0 from source at `.` Using minitest 5.11.3 Using ruby-progressbar 1.10.0 Using minitest-reporters 1.3.4 Bundle complete! 5 Gemfile dependencies, 11 gems now installed. Use `bundle info [gemname]` to see where a bundled gem is installed.  
There you go, we have now installed our dependencies, and if you look closely, we have also installed our gem in development mode.\nUsing dmp 0.1.0 from source at `.`  After we have updated and installed our dependencies, it is important that we exclude the newly generated Gemfile.lock from version control, as we\u0026rsquo;re developing a gem, not an application, and we\u0026rsquo;re not trying to replicate our environment in another one.\n$ echo Gemfile.lock \u0026gt;\u0026gt; .gitignore  Now that we have excluded the lock file, let\u0026rsquo;s write a simple test before making our first commit.\nOpen up test/dmp_test.rb and fill it with the following content.\nrequire \u0026#34;test_helper\u0026#34; require \u0026#39;dmp\u0026#39; class DmpTest \u0026lt; Minitest::Test def test_that_it_has_a_version_number refute_nil ::Dmp::VERSION end def test_say_hi assert_equal Dmp.say_hi(\u0026#39;Franccesco\u0026#39;), \u0026#39;Hello, Franccesco!\u0026#39; end end Now, we can see that we have 2 tests here. The first one tests whether our module has a VERSION number; if it doesn\u0026rsquo;t have this constant, the test should complain. We can check this out in the dmp module.\nmodule Dmp VERSION = \u0026#34;0.1.0\u0026#34; end As for the other one, we have written a test that checks that our module function say_hi(\u0026#39;Franccesco\u0026#39;) returns Hello, Franccesco!. As we haven\u0026rsquo;t written any modules yet, it will fail. Let\u0026rsquo;s respect the red, green, refactor cycle and make it fail.\nFor this, let\u0026rsquo;s run rake test to begin our minitests.\n$ bundle exec rake test 1) Error: DmpTest#test_say_hi: NoMethodError: undefined method `say_hi\u0026#39; for Dmp:Module /home/franccesco/dmp/test/dmp_test.rb:10:in `test_say_hi\u0026#39; 2 runs, 1 assertions, 0 failures, 1 errors, 0 skips Minitest complains that it cannot find the method say_hi; this is because we haven\u0026rsquo;t created our module yet. 
Let's create it right now in lib/dmp.rb to make it pass.

# lib/dmp.rb
require "dmp/version"

module Dmp
  def self.say_hi(name)
    "Hello, #{name}!"
  end
end

There you go! We have written a simple module that takes a name as a parameter and returns a greeting with the name of your choice. Let's run the test again.

$ bundle exec rake test
# Running:
..
Finished in 0.000661s, 3027.9266 runs/s, 3027.9266 assertions/s.
2 runs, 2 assertions, 0 failures, 0 errors, 0 skips

All good! Our tests report no failures now, meaning that our module successfully returns a greeting with our name. Our tests are fine, but we can make our minitest report friendlier!

Friendlier reports with minitest-reporters

Our red-green-refactor cycle is not showing either red or green yet, so let's add that to our environment. We can modify the behavior and presentation of our tests by opening the test helper found in test/test_helper.rb.

Earlier, we added a development dependency in our gemspec, a gem called minitest-reporters, which changes the presentation of our reports. Open test/test_helper.rb and spin up the minitest-reporters gem.

$LOAD_PATH.unshift File.expand_path("../../lib", __FILE__)
require "dmp"
require "minitest/autorun"

# add default progress bar to reports
require 'minitest/reporters'
Minitest::Reporters.use!

This will use the default progress bar reporter when we run our tests. To try it out, let's open up lib/dmp.rb and modify our code to make it fail.

require "dmp/version"

module Dmp
  def self.say_hi(name)
    "Hello, Palmer!"
  end
end

We know that our test expects the method to return the name we provide, but it will return Palmer instead, so our test should show a failed run with a red progress bar.

There we go!
This step is not strictly necessary for the development of our project, but it's a nice addition: it adds visual aid and makes testing a lot more enjoyable. Let's make the test pass, shall we?

Let's fix our method say_hi so it returns our name instead of Palmer; you know how to do that ;). After that, our test should be green.

Great! Now that we have set up our minitests correctly, we can move on and write a more robust test, one that will actually exercise the functionality of our project. But before we write those tests, it would be useful to explain how the HaveIBeenPwned API works.

HaveIBeenPwned API

In order to write the test, we first have to learn how the HaveIBeenPwned API works, and it is actually not difficult at all. We don't need to learn every aspect of the API, only the part that checks if our password is vulnerable.

You can find the documentation for this part of the API by clicking here, but let me give you an overview of how it works and what kind of requests we can submit.

Let's say that our password is 'passw0rd'. If we want to check this password through the API, we cannot submit the password in clear text, as this would be an insecure practice. Instead we compute its SHA-1 hash and submit only the first 5 characters of that hash to:

https://api.pwnedpasswords.com/range/{first 5 hash chars}

This will return a list of suffixes of all the hashes that match the first 5 characters of our hash, each followed by a count of how many times that hash was found in vulnerable datasets.
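To make that concrete, here is how the prefix can be derived with nothing but Ruby's standard library, using the 'passw0rd' example from the API docs:

```ruby
require 'digest/sha1'

password = 'passw0rd'

# Hash the password, then split the hex digest into the 5-character
# prefix we send to the API and the suffix we look for in the response.
sha1   = Digest::SHA1.hexdigest(password).upcase
prefix = sha1[0...5]
suffix = sha1[5..-1]

puts sha1   # 7C6A61C68EF8B9B6B061B28C348BC1ED7921CB53
puts prefix # 7C6A6 -- the only part that ever leaves our machine
```

The full digest, and therefore the password, never travels over the network; only the 5-character prefix does.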
Here are the simplified steps:

1. We encode our 'passw0rd' string to SHA-1, which would be 7C6A61C68EF8B9B6B061B28C348BC1ED7921CB53
2. We submit the first 5 characters of our hash, 7C6A6, to the API, which would be https://api.pwnedpasswords.com/range/7C6A6
3. It returns a long list of hash suffixes; we only need to find the suffix of our hash, which would be 1C68EF8B9B6B061B28C348BC1ED7921CB53

Example:

# If the password is secure, we wouldn't be able
# to find the suffix of the hash here.
# This is clearly not the case.
-- SNIP --
1BC4E6F00BECB5998201277DC62F89E08B0:7
1BC5AF255E721AF1C4AA83FD0F8EE8A79B8:3
1C68EF8B9B6B061B28C348BC1ED7921CB53:216221 <<- Here
1CEA692E5FA3ED23B956839B4B8BFCCC5F5:4
1DC4A0F7305069370733B17882579EBDF4E:3
-- SNIP --

This method is called the K-Anonymity model, and as we can see, the password 'passw0rd' was found in 216221 datasets. This is obviously not a secure password at all, so let's implement this functionality into our code, shall we?

Generating a secure passphrase

As stated in our project description, we're going to create a single gem that generates a passphrase and also checks whether the generated password is a known vulnerable password.

Right now, we're going to delete our previous say_hi test and create three more empty tests that we will fill out eventually. Here's how the tests should look:

require "test_helper"
require 'dmp'

class DmpTest < Minitest::Test
  # def setup; end

  def test_that_it_has_a_version_number
    refute_nil ::Dmp::VERSION
  end

  # def test_gen_passphrase; end
  # def test_vulnerable_pass; end
  # def test_secure_pass; end
end

Now, we can see that we have three more tests and a setup method:

First, we're going to test if our module (dmp) generates a secure passphrase.
Second, we're going to check if our generated passphrase is secure enough. Lastly, we're going to test if our program alerts us when a password is vulnerable.

Let's fill out test_gen_passphrase first. As we know, when we create tests we create expectations. Following this idea, we're going to define how our program should create our secure passphrase. Our "secure" passphrase for these tests will be "coding dose dot com" in the meantime.

def test_gen_passphrase
  # gen_passphrase should generate and respect passphrase length
  passphrase3 = Dmp.gen_passphrase(3)
  passphrase_default = Dmp.gen_passphrase
  passphrase12 = Dmp.gen_passphrase(12)
  assert_equal passphrase3.length, 3, 'Passphrase length != 3'
  assert_equal passphrase_default.length, 7, 'Passphrase length != 7'
  assert_equal passphrase12.length, 12, 'Passphrase length != 12'
end

There's a lot going on here, huh? No worries though, this is more of the same. Let's analyze the first variable declaration and the first assertion:

# Here we generate a passphrase which will consist of three words
passphrase3 = Dmp.gen_passphrase(3)
-- SNIP --
# Now we check if the passphrase previously created has a length of three words.
assert_equal passphrase3.length, 3, 'Passphrase length != 3'

If the passphrase is not three words long, the test complains with the message 'Passphrase length != 3'. Let's run this test.

NoMethodError: undefined method `gen_passphrase' for Dmp:Module

Our first expectation did not run correctly, but of course, this is what we're looking for. As there's no method named gen_passphrase in our module yet, we'll have to create it first.
But before we do that, we'll need a dictionary that gives us a list of words, which we'll use to generate a passphrase consisting of 3, 4, 8, or 100 words if we need to. You can find the list here: eff_long_wordlist.txt

Now, let's save this file to lib/dmp/assets/eff_long_wordlist.txt.

$ mkdir lib/dmp/assets
$ wget -O lib/dmp/assets/eff_long_wordlist.txt https://raw.githubusercontent.com/franccesco/dmp/master/lib/dmp/assets/eff_long_wordlist.txt

Now that we have saved the dictionary in the assets folder, we can open up our module and code the core functionality of our gem, which is to generate a secure passphrase using this dictionary.

module Dmp
  # First we load the absolute path of our eff_long_wordlist.txt.
  @eff_wordlist = File.dirname(__FILE__) + '/dmp/assets/eff_long_wordlist.txt'

  # The default passphrase length should be 7
  def self.gen_passphrase(pass_length = 7)
    # Read eff_long_wordlist and save it as a list.
    wordlist = File.readlines(@eff_wordlist)
    # Strip the '\n' out of every line.
    wordlist.map(&:strip!)
    # Shuffle the list and return up to pass_length words,
    # which in this case would be 7 words by default.
    wordlist.shuffle[0...pass_length]
  end
end

This should be pretty easy to follow:

1. We create an instance variable holding the absolute path of our wordlist.
2. We define our method gen_passphrase with a default length of 7.
3. We load our wordlist as a list and hold it in the wordlist variable.
4. We strip the '\n' from each word.
5. Lastly, we shuffle the words and return a list of words with our desired length.

Let's test it out:

Started with run options --seed 2535
2/2: [==========================================================================] 100% Time: 00:00:00, Time: 00:00:00
Finished in 0.01531s
2 tests, 4 assertions, 0 failures, 0 errors, 0 skips

Awesome! Our code works! But how can we actually check how it works?
Well, we can definitely import it into IRB and try it out with bundle exec irb:

irb(main):001:0> require 'dmp'
=> true
irb(main):002:0> Dmp.gen_passphrase
=> ["autopilot", "ivy", "overlay", "down", "visitor", "prenatal", "flirt"]
irb(main):003:0> Dmp.gen_passphrase(3)
=> ["roulette", "earthen", "garbage"]
irb(main):004:0> Dmp.gen_passphrase(6)
=> ["tartly", "happier", "juice", "itunes", "job", "eastward"]

You see? Now we can create a secure passphrase with our method. Let's move on to the next test.

Checking passphrases with HIBP API

Let's write the test for our method that checks if a passphrase or password is vulnerable (read: found in a dataset).

def test_vulnerable_pass
  # check_pwned should flag this password
  vuln_count = Dmp.check_pwned('passw0rd')
  refute_nil vuln_count
end

Let's analyze the first piece of code.

vuln_count = Dmp.check_pwned('passw0rd')

It gets clearer if we read this code in reverse. We have the password 'passw0rd' that we want to check for safety using the method check_pwned, which belongs to the module Dmp, and we hold the output of this call in the variable vuln_count. This output should tell us in how many datasets the password was found.

refute_nil vuln_count

As our password is not secure at all, the value of vuln_count should not be nil, because our method check_pwned should find the suffix of our hash in the HaveIBeenPwned (HIBP) datasets. With this in mind, we test that the value of vuln_count is not nil with refute_nil.
Let's run the test.

NoMethodError: undefined method `check_pwned' for Dmp:Module

As there's no method named check_pwned, our test complains. No method? No problem, let's open up the module dmp.rb and fill out the code.

require "dmp/version"
# require SHA-1 digest and http utilities
require 'digest/sha1'
require 'net/http'

module Dmp
  # -- CODE SNIPPED --

  def self.check_pwned(passphrase)
    # If the passphrase is an array generated by gen_passphrase we convert
    # it to a unified string; if it's a string already then
    # no changes are applied to the passphrase variable.
    passphrase = passphrase.join(' ') if passphrase.is_a?(Array)

    # We encode our passphrase to SHA-1 and save our prefix, consisting
    # of the first 5 characters, to the variable sha1_excerpt, and the
    # suffix to the variable sha1_to_look_for.
    sha1_pass = Digest::SHA1.hexdigest(passphrase)
    sha1_excerpt = sha1_pass[0...5]
    sha1_to_look_for = sha1_pass[5..-1]

    # We make the API call with our SHA-1 prefix and store the response in
    # the variable api_request
    api_url = URI("https://api.pwnedpasswords.com/range/#{sha1_excerpt}")
    api_request = Net::HTTP.get(api_url)

    # The response is text instead of JSON, so we need to format it
    # into a dictionary so the rest of the hash can be located more easily.
    # => String '0018A45C4D1DEF81644B54AB7F969B88D65:21'
    # => Array ['0018A45C4D1DEF81644B54AB7F969B88D65:21', ...]
    # => 2D Array [['0018A45C4D1DEF81644B54AB7F969B88D65', '21'], ...]
    # => Hash {'0018A45C4D1DEF81644B54AB7F969B88D65' => 21, ...}
    striped_list = api_request.split("\r\n")
    pass_list = striped_list.map { |hash| hash.split(':') }
    hash_list = Hash[*pass_list.flatten!]
    hash_list[sha1_to_look_for.upcase]
  end
end

Now there's a lot going on here, so let's simplify the steps:

1. The passphrase should be a string, as in "roulette earthen garbage", but if it was generated by gen_passphrase it will be an array; only in that case do we convert it to a string.
2. We use the digest module to convert our string to SHA-1; we save the 5-character prefix to sha1_excerpt and the suffix to sha1_to_look_for.
3. We perform a GET request to the API using the prefix of our hash.
4. The response should be a text list, as we saw previously. We have to format that list in a way that lets us search for our hash suffix and get the count of how many times the password was found in datasets.

And this is the slightly tricky part: when we receive the response from the API, it doesn't provide us with a pretty JSON document to work with, it provides us with a bare list like this:

003E7C1C94342454421573ADECD156C6AE8:2\r\n00A4DB094C56008C81D9DA2C55166F1A5BA:4\r\n00F042A842B821E2F727B0A4A3C0555E4A0:2\r\n01F14311110773C8064336D0D52736141D2:3\r\n01F6581B8152E00CBA4F8261335A78DA26F:1\r\n020290C96F182C924647A747F21681697B9:2\r\n02146D9588F55A6751CE580AA1AC6E16106:2\r\n02C2409C5E2AAC99D2937CAB31EB4677EAD:2\r\n02EFB814079D668ACF7308FAA18583D8CED:2\r\n033211C0B3B8B0EBC0BFDF2000CE0FFA166:1\r\n0378E7D9BC61CE282E9664D404505F66457:1\r\n03D801A3E713009943D0A76217278ABE2DD:3\r\n0412EEBFCB315371F4CDEAEB3AFDBEA43CD:1...

Pretty, right? (Sarcasm intended.) Now, I'm sure there are better ways to convert this mess into a dictionary, but for clarity and brevity we will not get into mind-boggling regex tricks right now.
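As a side note: on Ruby 2.6 and newer, the split-map-flatten dance can be collapsed into a single to_h with a block. This is just an alternative sketch, not the code we'll ship, since the version above also runs on older Rubies:

```ruby
# A tiny stand-in for the raw HIBP response body.
api_request = "003E7C1C94342454421573ADECD156C6AE8:2\r\n" \
              "00A4DB094C56008C81D9DA2C55166F1A5BA:4"

# Each line is 'SUFFIX:COUNT', so splitting on ':' yields exactly the
# two-element [key, value] pairs that the to_h block expects.
hash_list = api_request.split("\r\n").to_h { |line| line.split(':') }

puts hash_list['003E7C1C94342454421573ADECD156C6AE8'] # "2"
```

Same result, one pass, and no destructive flatten! on an intermediate array.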
Let's try this piece of code in IRB. First, let's remove all the \r\n's with striped_list = api_request.split("\r\n").

irb(main):009:0> striped_list = api_request.split("\r\n")
=> ["003E7C1C94342454421573ADECD156C6AE8:2", "00A4DB094C56008C81D9DA2C55166F1A5BA:4", "00F042A842B821E2F727B0A4A3C0555E4A0:2", "01F14311110773C8064336D0D52736141D2:3", "01F6581B8152E00CBA4F8261335A78DA26F:1", "020290C96F182C924647A747F21681697B9:2", "02146D9588F55A6751CE580AA1AC6E16106:2", "02C2409C5E2AAC99D2937CAB31EB4677EAD:2", "02EFB814079D668ACF7308FAA18583D8CED:2", "033211C0B3B8B0EBC0BFDF2000CE0FFA166:1", "..."]

Good, now let's map that list and create a 2D array:

irb(main):010:0> pass_list = striped_list.map { |hash| hash.split(':') }
=> [["003E7C1C94342454421573ADECD156C6AE8", "2"], ["00A4DB094C56008C81D9DA2C55166F1A5BA", "4"], ["00F042A842B821E2F727B0A4A3C0555E4A0", "2"], ["01F14311110773C8064336D0D52736141D2", "3"], ["..."],

# This individualizes our suffixes:
irb(main):011:0> pass_list[0]
=> ["003E7C1C94342454421573ADECD156C6AE8", "2"]
irb(main):013:0> pass_list[15]
=> ["04BC55FD524B3E42D6A732E2EA8076A9178", "5"]

Perfect... well, not quite; let's create a dictionary out of the 2D array.

irb(main):014:0> hash_list = Hash[*pass_list.flatten!]
=> {"003E7C1C94342454421573ADECD156C6AE8"=>"2", "00A4DB094C56008C81D9DA2C55166F1A5BA"=>"4", "00F042A842B821E2F727B0A4A3C0555E4A0"=>"2", "01F14311110773C8064336D0D52736141D2"=>"3", "01F6581B8152E00CBA4F8261335A78DA26F"=>"1", "020290C96F182C924647A747F21681697B9"=>"2", "02146D9588F55A6751CE580AA1AC6E16106"=>"2", "02C2409C5E2AAC99D2937CAB31EB4677EAD"=>"2", "02EFB814079D668ACF7308FAA18583D8CED"=>"2", "033211C0B3B8B0EBC0BFDF2000CE0FFA166"=>"1", "0378E7D9BC61CE282E9664D404505F66457"=>"1", "03D801A3E713009943D0A76217278ABE2DD"=>"3", "0412EEBFCB315371F4CDEAEB3AFDBEA43CD"=>"1", "0422590C0BC43132207FF55FD78717074A4"=>"2", "04487E63244F1E2E868870AF5AE42ED8F1D"=>"2", "04BC55FD524B3E42D6A732E2EA8076A9178"=>"5", "051394B2B64EF899A10064E2A068924A46C"=>"2", "05AB0063CC2A0C1B857329D914932DF7C5B"=>"1", "06AD55DDE7997263212B916CDA2D9439924"=>"4", "083C47463AAF42031B31DDA54E2F68DC807"=>"1", "08765B6BDFAF683851AF48258A042D591C1"=>"2", "099FC9301DB35018687F5BEB5254530020A"=>"2",
"09B30BF127F929D1D9CD946E84C7F7E8FBF"=>"4", "09D44DA6F15D940BFB19315A0C54CEAECBF"=>"5", "0A649886EE897919604D2D8F35384ECC90F"=>"3", "0C720EC0E1BED69EE7DE19C3EA4326E3DFF"=>"7", "0C9279D46756FDA6911146D2245A013C4F4"=>"3", "...",

Isn't that better? Now we can search for the suffix of our hash within the variable hash_list effortlessly:

irb(main):015:0> hash_list[sha1_to_look_for.upcase]
=> "216221"

It's working! And right now that's all we need to know; we can always refactor it later. Let's load our module into IRB and check some passwords.

irb(main):001:0> require 'dmp'
=> true
irb(main):002:0> Dmp.check_pwned('passw0rd')
=> "216221"
irb(main):003:0> Dmp.check_pwned('iloveyou')
=> "1593388"
irb(main):004:0> Dmp.check_pwned('coding dose dot com')
=> nil

As you can see, the password 'passw0rd' was found in 216221 datasets (or leaks), iloveyou was found in 1593388, and, fortunately, coding dose dot com was not found in any dataset, which makes it secure against dictionary attacks, at least for the moment.

Asserting secure passwords

This is the last test we're going to write here. More precisely, we are going to check that a secure password is not flagged: we know that coding dose dot com is secure enough, so we can test it, and our gem should not flag this password as insecure.
Let's fill out the test right now.

def test_secure_pass
  # check_pwned should not flag this passphrase
  vuln_count = Dmp.check_pwned('iloveyou')
  assert_nil vuln_count
end

What we're doing here is pretty much the same as in our previous test with refute_nil, but this time vuln_count should be nil, because our secure password should not be found in any dataset on the HIBP API. Right now we want to see the test fail, so we have set the password to 'iloveyou'.

$ bundle exec rake test
# --- SNIP ---
Expected # encoding: ASCII-8BIT
"1593388" to be nil.
# --- SNIP ---

There we see our test failing. Now let's make it pass by changing the password to 'coding dose dot com', and we should see our test pass without any issue.

$ bundle exec rake test
Started with run options --seed 24537
4/4: [==========================================================================] 100% Time: 00:00:00, Time: 00:00:00
Finished in 0.53177s
4 tests, 6 assertions, 0 failures, 0 errors, 0 skips

Refactoring our tests

There's one issue with our tests that can be fixed very easily. Let's take a closer look at the last test.

class DmpTest < Minitest::Test
  # -- CODE SNIPPED --

  def test_secure_pass
    # check_pwned should not flag this passphrase
    vuln_count = Dmp.check_pwned('coding dose dot com')
    assert_nil vuln_count
  end
end

If there's one thing that bothers me, it is that we're not actually testing the core functionality of our gem. Why are we testing a hard-coded passphrase ('coding dose dot com') if our gem CAN generate one for us?
We could even change our test like this and it would work perfectly fine!

class DmpTest < Minitest::Test
  # -- CODE SNIPPED --

  def test_secure_pass
    # check_pwned should not flag this passphrase
    safe_pass = Dmp.gen_passphrase(7)
    vuln_count = Dmp.check_pwned(safe_pass)
    assert_nil vuln_count
  end
end

This is perfectly fine; however, what if we write more tests and need to generate more passwords? We would have to generate a new passphrase in every test, which would hurt performance and wouldn't be DRY code.

In order to avoid repetition and make our lives way easier, we can write a setup method that creates an instance variable holding our generated secure password, along with an insecure one, so we can reuse them across many tests.

class DmpTest < Minitest::Test
  def setup
    @unsafe_pass = 'passw0rd'
    @safe_pass = Dmp.gen_passphrase(12)
  end

  # -- CODE SNIPPED --

  def test_vulnerable_pass
    # check_pwned should flag this password
    vuln_count = Dmp.check_pwned(@unsafe_pass)
    refute_nil vuln_count
  end

  def test_secure_pass
    # check_pwned should not flag this passphrase
    vuln_count = Dmp.check_pwned(@safe_pass)
    assert_nil vuln_count
  end
end

Now if we run our tests everything should work correctly, and we can reuse these passwords without having to declare them in each test.

Command Line Interface (CLI) and publishing our gem on RubyGems

Creating our CLI with Thor

Now that we have created the core functions of our gem, we can build a nice CLI for it. Luckily, this is made easy by a framework called Thor.

 Thor is a toolkit for building powerful command-line interfaces. It is used in Bundler, Vagrant, Rails and others.

I invite you to visit the website and read a bit about Thor so you understand how the framework works, but don't worry, it's actually pretty simple and straightforward.
To begin with, we're going to create the file in which we'll build our CLI.

# Create the file cli.rb under lib/dmp/
$ touch lib/dmp/cli.rb

Now that we have created our file, let's recall our project description:

1. We want our gem to generate a passphrase.
2. We want our gem to check this passphrase against HIBP to test if it's secure.
3. And we also want an option to automatically copy this passphrase to the clipboard.

Ideally, we want to recreate this behavior:

$ dmp gen 4 --hibp --clipboard
- Passphrase: cobweb desolate pushy mulled
- Copied to clipboard.
- Password was not found in a dataset.

Now, I love colors in my terminal, so we'll add colors to our output; in fact, I want every word in the passphrase to have a random color. But let's not get ahead of ourselves; let's add a little bit of code to our CLI to get started. Open the file:

# lib/dmp/cli.rb
require 'thor'
require 'dmp'
require 'colorize'
require 'clipboard'

module Dmp
  class CLI < Thor
  end
end

This is the bare minimum we need to get our CLI started; however, when we try to execute our gem in a terminal using bundler, it isn't detected. This is because we need to define our gem's executable file before we proceed.

# Create the file exe/dmp
$ mkdir exe
$ touch exe/dmp
# Make dmp executable
$ chmod +x exe/dmp

Please notice that our dmp file does not have any extension; this is intended. Let's open this file and fill it with some code.

#!/usr/bin/env ruby
require 'dmp/cli'
Dmp::CLI.start

In order for bundler to detect our new executable, we must add it to version control with git.

$ git add .
$ git commit -am "Add CLI with Thor"

After that, let's update our gems.

# the command 'bundle' automatically updates and installs your gems
$ bundle
Using rake 10.5.0
Using ansi 1.5.0
Using builder 3.2.3
Using bundler 1.17.2
Using clipboard 1.3.3
Using colorize 0.8.1
Using thor 0.20.3
Using dmp 0.1.0 from source at `.`
Using irb 1.0.0
Using minitest 5.11.3
Using ruby-progressbar 1.10.0
Using minitest-reporters 1.3.6
Bundle complete! 6 Gemfile dependencies, 12 gems now installed.
Use `bundle info [gemname]` to see where a bundled gem is installed.

And finally we can see our CLI come to life.

$ bundle exec dmp
Commands:
  dmp help [COMMAND]  # Describe available commands or one specific command

But we're still missing all the functionality we want, and there's no description of what our program does. Let's fix this by adding a description and three tasks to our CLI.

These tasks will be gen_pass, which we'll use to generate passphrases; check_pass, which will check if a passphrase or password is found in a HIBP dataset; and lastly an about task, which will display information about the program and its author. Alright? Let's do it.

require 'thor'
require 'dmp'
require 'colorize'
require 'clipboard'

module Dmp
  class CLI < Thor
    desc 'gen [length]', 'Generate a passphrase of the desired length.'
    def gen_pass; end

    desc 'check', 'Check if a password/passphrase is vulnerable.'
    def check_pass; end

    desc 'about', 'Displays version number and information'
    def about; end
  end
end

Now, we have added empty tasks, and that's O.K.
for now; we will fill them in one by one. If we execute our gem, we will see that the new tasks were added.

$ bundle exec dmp
Commands:
  dmp about           # Displays version number and information
  dmp check           # Check if a password/passphrase is vulnerable.
  dmp gen [length]    # Generate a passphrase of the desired length.
  dmp help [COMMAND]  # Describe available commands or one specific command

Let's code the gen task first. I want to add two options to the gen command: when I generate a passphrase of my desired length, I want to automatically copy the new passphrase to the clipboard, and I also want to check if the passphrase shows up in the HIBP database, which would make it insecure. We can add this optional functionality to our gen command using the keyword method_option.

require 'thor'
require 'dmp'
require 'colorize'
require 'clipboard'

module Dmp
  class CLI < Thor
    desc 'gen [length]', 'Generate a passphrase of the desired length.'
    method_option :clipboard, aliases: '-c', type: :boolean,
                  desc: 'Copy passphrase to clipboard.'
    method_option :hibp, aliases: '-H', type: :boolean,
                  desc: 'Check if passphrase is vulnerable in HIBP database.'
    def gen_pass(pass_length = 7)
    end
  end
end

We can see that we have added the clipboard option, which is activated with the alias -c. This option is a boolean, so it is treated as a flag, and it has its own description; the same goes for the hibp option. We can see this reflected when we check the gem's help command.

$ bundle exec dmp help gen
Usage:
  dmp gen [length]

Options:
  -c, [--clipboard], [--no-clipboard]  # Copy passphrase to clipboard.
  -H, [--hibp], [--no-hibp]            # Check if passphrase is vulnerable in HIBP database.

Generate a passphrase of the desired length.
module Dmp
  class CLI < Thor
    # -- CODE SNIPPED --
    def gen_pass(pass_length = 7)
      # Force our length value to be an integer
      new_passphrase = Dmp.gen_passphrase(pass_length.to_i)

      # If the options :clipboard and :hibp are true, then we proceed
      # to copy the contents of the passphrase to the clipboard
      # and check if the passphrase is vulnerable.
      Clipboard.copy(new_passphrase.join(' ')) if options[:clipboard]
      dataset_count = Dmp.check_pwned(new_passphrase) if options[:hibp]

      # To add colors, first we store the available colors in a variable,
      # but we remove the color black, which makes some words unreadable
      # in the terminal. After that, we map over the passphrase list
      # and assign a random color to each word.
      colors = String.colors
      colors.delete(:black) # black looks ugly in the terminal
      new_passphrase.map! do |phrase|
        random_color = colors.sample
        phrase.colorize(random_color)
      end

      # We add default messages in case our options are activated: a green bold
      # message when the user wants to copy the passphrase, another one if the
      # passphrase is safe, and a red bold one if the passphrase is vulnerable.
      copy_msg = '- Copied to clipboard.'.bold.green
      vuln_pass_msg = "- WARNING: Passphrase appears in #{dataset_count} datasets!".red.bold
      safe_pass_msg = '- Password was not found in a dataset.'.green.bold

      # Bold the title, then join the passphrase to make it a string.
      puts '- Passphrase: '.bold + new_passphrase.join(' ')
      # If the clipboard option is active then display the clipboard message
      puts copy_msg if options[:clipboard]
      # If the option :hibp is true then check if the pass is found in a dataset.
      # If dataset_count is not nil then display vuln_pass_msg, else display
      # safe_pass_msg.
      puts dataset_count ?
vuln_pass_msg : safe_pass_msg if options[:hibp] end # -- CODE SNIPPED -- end end It looks a little long because of all the comments; I only wrote them for clarity, so you can leave them out to keep your code from getting cluttered. The good news is that we have completed our first task! It should execute flawlessly. Let\u0026rsquo;s check it out.\n$ bundle exec dmp - Passphrase: molecular lubricate press net plank crook subpanel $ bundle exec dmp gen 3 - Passphrase: preppy dividing epidural $ bundle exec dmp gen 3 -c -H - Passphrase: jaunt nurture reason - Copied to clipboard. - Password was not found in a dataset. $ bundle exec dmp gen 1 -H - Passphrase: capacity - WARNING: Passphrase appears in 1879 datasets! Awesome! Our gem works as expected, with all of the functionality we intended for it. Now, let\u0026rsquo;s fill out the next task, which would be check_pass. This task will not generate a passphrase; instead, it will check a password or passphrase that we already have.\nmodule Dmp class CLI \u0026lt; Thor # -- CODE SNIPPED -- desc \u0026#39;check\u0026#39;, \u0026#39;Check if a password/passphrase is vulnerable.\u0026#39; def check_pass puts \u0026#34;Enter your password, press ENTER when you\u0026#39;re done.\u0026#34; password = ask(\u0026#39;Password (hidden):\u0026#39;.yellow, echo: false) (puts \u0026#34;Aborted.\u0026#34;.red.bold; exit) if password.empty? dataset_count = Dmp.check_pwned(password) vuln_msg = \u0026#34;Your password appears in #{dataset_count} datasets!\u0026#34;.red.bold safe_msg = \u0026#34;Your password was not found in a dataset.\u0026#34;.green.bold puts dataset_count ? vuln_msg : safe_msg end # -- CODE SNIPPED -- end end As we can see, this task is very simple and short. One thing worth highlighting is that we can ask for a password without disclosing it in the terminal, for security reasons of course. 
We can do this with the ask method, passing echo: false to turn off the echo while the user types.\nNow that our check_pass method is finished, let\u0026rsquo;s try it out on the console.\n$ bundle exec dmp check Enter your password, press ENTER when you\u0026#39;re done. Password (hidden): Your password was not found in a dataset. $ bundle exec dmp check Enter your password, press ENTER when you\u0026#39;re done. Password (hidden): Your password appears in 15996 datasets! There we go! For the final task, let\u0026rsquo;s fill in the about task. For old times\u0026rsquo; sake (and a little bit of cockiness) we\u0026rsquo;ll add a little banner when we call the about task.\nmodule Dmp class CLI \u0026lt; Thor # -- CODE SNIPPED -- desc \u0026#39;about\u0026#39;, \u0026#39;Displays version number and information\u0026#39; def about puts Dmp::BANNER.bold.red puts \u0026#39;version: \u0026#39;.bold + Dmp::VERSION.green puts \u0026#39;author: \u0026#39;.bold + \u0026#39;@__franccesco\u0026#39;.green puts \u0026#39;homepage: \u0026#39;.bold + \u0026#39;https://github.com/franccesco/dmp\u0026#39;.green puts \u0026#39;learn more: \u0026#39;.bold + \u0026#39;https://codingdose.info\u0026#39;.green puts # extra line, somehow I like them. end # -- CODE SNIPPED -- end end We display the banner first, then a few details about ourselves and the program, like the version, author, and homepage; you can add as many as you want. But we haven\u0026rsquo;t declared a BANNER yet. So let\u0026rsquo;s do that, shall we? 
Let\u0026rsquo;s open up our version file in lib/dmp/version.rb and add our banner there.\nmodule Dmp VERSION = \u0026#34;0.1.0\u0026#34; BANNER = \u0026#39;\u0026#39;\u0026#39; ____ __ __ ____ | _ \\ | \\/ | | _ \\ | | | | | |\\/| | | |_) | | |_| | | | | | | __/ |____/ |_| |_| |_| \u0026#39;\u0026#39;\u0026#39; end There we go; now let\u0026rsquo;s test our about task.\n$ bundle exec dmp about ____ __ __ ____ | _ \\  | \\/ | | _ \\ | | | | | |\\/| | | |_) | | |_| | | | | | | __/ |____/ |_| |_| |_| version: 0.1.0 author: @__franccesco homepage: https://github.com/franccesco/dmp learn more: https://codingdose.info Cool, right?! Now we have a fully functional gem! Let\u0026rsquo;s save our final changes into git.\n$ git commit -am \u0026#34;Complete CLI tasks\u0026#34; [develop ba6f18e] Complete CLI tasks 2 files changed, 57 insertions(+), 4 deletions(-) Managing versions with gem-release I assume you\u0026rsquo;re going to refactor this gem, which you should do! There\u0026rsquo;s a lot of dirty code in it, right? I\u0026rsquo;m not doing that in this article though. But once you do, you will probably add more features or maybe remove some of them (like the colors, for example), and when that happens, you would like to reflect these changes by bumping your version number.\nI use Semantic Versioning; you should check it out if you don\u0026rsquo;t know about it. If you made a change and you want to bump your version number, you would have to do it manually, opening the version.rb file and then committing your changes. However, we can do this much more easily with the gem-release extension. 
Let\u0026rsquo;s install it right now.\n$ gem install gem-release Fetching gem-release-2.0.1.gem Successfully installed gem-release-2.0.1 Parsing documentation for gem-release-2.0.1 Installing ri documentation for gem-release-2.0.1 Done installing documentation for gem-release after 0 seconds 1 gem installed Now, I want to make a little change to our gem: I don\u0026rsquo;t want to write dmp gen each time I want to generate a new passphrase; I want that to happen by default! Luckily, we can do this with the default_task keyword. Let\u0026rsquo;s open up our CLI.\nmodule Dmp class CLI \u0026lt; Thor default_task :gen_pass # -- CODE SNIPPED -- end end Now let\u0026rsquo;s check and commit our change.\n$ bundle exec dmp - Passphrase: whinny capitol balsamic colt washout lend cradling $ git commit -am \u0026#34;Add default task to CLI\u0026#34; After we have committed our changes, we can go ahead and bump our version number. Since we added functionality to our gem, we should bump the minor version.\n$ gem bump --version minor Bumping dmp from version 0.1.0 to 0.2.0 Changing version in lib/dmp/version.rb from 0.1.0 to 0.2.0 Staging lib/dmp/version.rb $ git add lib/dmp/version.rb Creating commit $ git commit -m \u0026#34;Bump dmp to 0.2.0\u0026#34; [develop b54f1f4] Bump dmp to 0.2.0 1 file changed, 1 insertion(+), 1 deletion(-) All is good, thanks my friend. As you can see, gem-release bumps our minor version and also commits the change in a single pass. After we have made all of our changes, it is a good idea to publish our gem so other people can download it and use it!\nPublishing our gem Publishing our gem is really easy. For this, you should go to Rubygems.org and sign up with an email and password, as you will need to authenticate in order to manage your uploaded gems. 
Here\u0026rsquo;s how you do it.\n# First we build our gem with our gem specifications $ gem build dmp.gemspec Successfully built RubyGem Name: dmp Version: 0.1.0 File: dmp-0.1.0.gem After we have built our gem, it\u0026rsquo;s time to publish it.\n$ gem push dmp-0.1.0.gem Enter your RubyGems.org credentials. Don\u0026#39;t have an account yet? Create one at https://rubygems.org/sign_up Email: gem_author@example Password: Signed in. Pushing gem to RubyGems.org... Successfully registered gem: dmp-0.1.0.gem And that\u0026rsquo;s it! Your gem is available to the public; you can try this yourself by installing your gem hosted on the rubygems servers.\n$ gem install dmp Fetching dmp-0.2.4.gem Successfully installed dmp-0.2.4 Parsing documentation for dmp-0.2.4 Installing ri documentation for dmp-0.2.4 Done installing documentation for dmp after 0 seconds 1 gem installed $ dmp about ____ __ __ ____ | _ \\  | \\/ | | _ \\ | | | | | |\\/| | | |_) | | |_| | | | | | | __/ |____/ |_| |_| |_| version: 0.2.4 author: @__franccesco homepage: https://github.com/franccesco/dmp learn more: https://codingdose.info  EOF This was quite a ride, right? If you have any questions, let me know in the comments below. Thank you for reading, and I appreciate the time and effort you put in if you have followed this tutorial.\nYou can check out the code for DMP in the github repository here: https://github.com/franccesco/dmp And also you can check out the gem at RubyGems.org: https://rubygems.org/gems/dmp\nIf you feel that you need to rewrite the dirty code or make things your way, then who am I to stop you? Go do it! I\u0026rsquo;m sure you\u0026rsquo;ll do a hell of a job! :)\n","permalink":"https://codingdose.info/posts/how-to-create-a-ruby-gem-with-bundler/","summary":"I\u0026rsquo;ve been writing and focusing on Python lately and I\u0026rsquo;ve been wanting to make more content about Ruby. 
Ruby was my very first language and the one that got me into this programming world.\nFor this entry I\u0026rsquo;m going to show how to create, test, and publish our gem to RubyGems.org to make it available for everyone, and in future entries we\u0026rsquo;re going to see how to set up CI/CD for automatic testing and deployment, Behavior Driven Testing with Cucumber/Aruba, and Code Coverage with SimpleCov.","title":"How to Create a Ruby Gem With Bundler"},{"content":"Why should you do it?  \u0026ldquo;Using GPG, you can sign and verify tags and commits. With GPG keys, tags or commits that you\u0026rsquo;ve authored on GitHub are verified and other people can trust that the changes you\u0026rsquo;ve made really were made by you.\u0026rdquo;\n  About GPG | Github  Signing our commits is a great way to verify your commits and let your collaborators know that they can trust that you committed those changes in your project. We\u0026rsquo;re going to see how we can use a GPG key to sign our commits and also how to change git settings so it signs our commits automatically.\nThis guide will assume that you haven\u0026rsquo;t set up your GPG key yet.\nInstalling GPG GnuPG should be available on your system; if it\u0026rsquo;s not, you can download it and follow the instructions to install it depending on your distribution here: https://www.gnupg.org/download/\nSetting up GPG This is a pretty easy step, as GnuPG has a wizard that can help us fill in the requirements to create our GPG key with the gpg --full-generate-key command.\nWe\u0026rsquo;re going to generate an RSA 4096-bit key with no expiration date for our email address.\nNOTE: In order to verify your commits in Github, you must enter the email address that you have registered in Github, and it should also match the email you previously configured in git.\n$ gpg --full-generate-key Please select what kind of key you want: (1) RSA and RSA (default) (2) DSA and Elgamal (3) DSA (sign only) (4) RSA (sign only) Your 
selection? 1 RSA keys may be between 1024 and 4096 bits long. What keysize do you want? (3072) 4096 Requested keysize is 4096 bits Please specify how long the key should be valid. 0 = key does not expire \u0026lt;n\u0026gt; = key expires in n days \u0026lt;n\u0026gt;w = key expires in n weeks \u0026lt;n\u0026gt;m = key expires in n months \u0026lt;n\u0026gt;y = key expires in n years Key is valid for? (0) 0 Key does not expire at all Is this correct? (y/N) y GnuPG needs to construct a user ID to identify your key. Real name: Franccesco Orozco Email address: franccesco.orozco@codingdose.info Comment: My First Key! You selected this USER-ID: \u0026#34;Franccesco Orozco (My First Key!) \u0026lt;franccesco.orozco@codingdose.info\u0026gt;\u0026#34; After that, it will ask us to input a passphrase; remember to choose a good one.\nWe need to generate a lot of random bytes. It is a good idea to perform some other action (type on the keyboard, move the mouse, utilize the disks) during the prime generation; this gives the random number generator a better chance to gain enough entropy. gpg: key 438D3A434DA6E6FA marked as ultimately trusted gpg: revocation certificate stored as '/home/franccesco/.gnupg/openpgp-revocs.d/D77C79FFD77BFD0B9BCE58FE438D3A434DA6E6FA.rev' public and secret key created and signed. pub rsa4096 2018-09-05 [SC] D77C79FFD77BFD0B9BCE58FE438D3A434DA6E6FA uid Franccesco Orozco (My First Key!) \u0026lt;franccesco.orozco@codingdose.info\u0026gt; sub rsa4096 2018-09-05 [E] That\u0026rsquo;s it! We were able to create our GPG key pretty easily, right? Let\u0026rsquo;s see how to add it to Github.\nListing our key(s) Now that we have created our key, we can list it like this.\n$ gpg --list-secret-keys --keyid-format LONG franccesco.orozco sec rsa4096/438D3A434DA6E6FA 2018-09-05 [SC] D77C79FFD77BFD0B9BCE58FE438D3A434DA6E6FA uid [ultimate] Franccesco Orozco (My First Key!) 
\u0026lt;franccesco.orozco@codingdose.info\u0026gt; ssb rsa4096/5D21EBE255188195 2018-09-05 [E] What we\u0026rsquo;re looking for in our key is the GPG Key ID, which is: 438D3A434DA6E6FA. With this identifier we can export the GPG public key and paste it in Github.\n$ gpg --armor --export 438D3A434DA6E6FA -----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFuP6u4BEADA6ZWSjy86eMpS6OczKgkPbytA7b5lzMcdwnSccwuX0w9/fVA7 yx+fuZZuKO1rHNR96wgq4m5Z9iUM7UQ5FG9g93CXUp6kmPcast3fpQ7D13Oq6lEy iNmxziJ3K/DQnEj8vgEl6vxDusBswRdYXHKytKt2pFngZqF/rtD0Mbf9shrGaI9B --- SNIP --- Hlupx07dHpBEsjaiKWL80GhKFSQNKO+oOlSZ537nRqcLUzU7zvc1qLp6Z6ZSAFjl E3aw43pKynoLYXvxUO1vi0En//jMSG4riLZDiZBkfM21 =QqhC -----END PGP PUBLIC KEY BLOCK----- If you have the xclip package on your system and don\u0026rsquo;t want to select the whole key, then here\u0026rsquo;s a great tip to automatically add it to your clipboard.\n$ gpg --armor --export 438D3A434DA6E6FA | xclip -sel c Add GPG key to Github Now it\u0026rsquo;s time to paste the key in Github. This is pretty easy and self-explanatory: go to SSH and GPG Keys in your Github settings and click on New GPG Key.\nHere you can paste your public key and submit it. After that, you can see that you have already added your Github key; if it says unverified, it\u0026rsquo;s because the key doesn\u0026rsquo;t match the email address on your Github account, so make sure everything matches the email in your GPG key. Configure git with your key Now let\u0026rsquo;s make git aware of our new key with a global configuration.\n$ git config --global user.signingkey 438D3A434DA6E6FA After that we can start committing signed changes using the -S flag.\n$ git commit -S -m \u0026#34;Signed commit!\u0026#34; But of course, if you don\u0026rsquo;t want to set the -S flag every time, you can make it the default for your commits.\n$ git config --global commit.gpgsign true Push a signed commit Now, go ahead and feel free to push a commit in Github. 
From now on, all pushed commits will show as verified in your commit history.\nThis will add credibility and security to your projects, especially if you work with sensitive data or with a large number of people.\nFurther reading  Generating a new GPG Key Telling Git about your GPG key Signing commits using GPG Gitlab guide on GPG  ","permalink":"https://codingdose.info/posts/generate-and-sign-your-commits-with-gpg-in-github/","summary":"Why should you do it?  \u0026ldquo;Using GPG, you can sign and verify tags and commits. With GPG keys, tags or commits that you\u0026rsquo;ve authored on GitHub are verified and other people can trust that the changes you\u0026rsquo;ve made really were made by you.\u0026rdquo;\n  About GPG | Github  Signing our commits is a great way to verify your commits and let your collaborators know that they can trust that you committed those changes in your project.","title":"Generate and Verify Your Commits With GPG in GitHub"},{"content":"Today I was facing a problem: I didn\u0026rsquo;t know how to change Flask\u0026rsquo;s root directory. By default, Flask looks for templates and static files under the root directory (/); how can we change that?\nChanging the root path Here\u0026rsquo;s my directory structure:\n. ├── api_files │ ├── static │ │ └── style.css │ └── templates │ └── index.html ├── api.py ├── Pipfile └── Pipfile.lock 3 directories, 5 files I want Flask to process the static and templates folders inside the api_files folder; my main Flask app is api.py, which is outside the api_files folder. 
Let\u0026rsquo;s open it up and change that behavior so it can process the templates and static files inside that folder.\nfrom flask import Flask, render_template # here we can set a different root path app = Flask(__name__, root_path=\u0026#39;api_files/\u0026#39;) @app.route(\u0026#39;/\u0026#39;) def index(): \u0026#34;\u0026#34;\u0026#34;Render home page.\u0026#34;\u0026#34;\u0026#34; return render_template(\u0026#39;index.html\u0026#39;) # we can render templates as usual if __name__ == \u0026#39;__main__\u0026#39;: app.run() This way, Flask will no longer look for templates and static files under the project root (/) path, but inside the api_files folder instead.\nThere are a lot of solutions out there about changing the jinja2 configuration; I think this is a much easier and cleaner approach.\nYou can read more about these variables and the Flask API here.\n","permalink":"https://codingdose.info/posts/change-flask-root-folder/","summary":"Today I was facing a problem: I didn\u0026rsquo;t know how to change Flask\u0026rsquo;s root directory. By default, Flask looks for templates and static files under the root directory (/); how can we change that?\nChanging the root path Here\u0026rsquo;s my directory structure:\n. 
├── api_files │ ├── static │ │ └── style.css │ └── templates │ └── index.html ├── api.py ├── Pipfile └── Pipfile.lock 3 directories, 5 files I want Flask to be able to process the folders static and templates inside the api_files folder; my main Flask app is api.","title":"Change Flask Root Folder for Templates and Static Files"},{"content":"I suddenly needed to deploy a very basic Flask API to the cloud so it could be available to the public.\nThe thing is, Flask is not made for a scalable production environment, but if you only need to deploy a very basic web server to Heroku then this guide is for you.\nInitialize a repository First of all, we will need to set up a virtual environment with Pipenv, create a Flask app, and initialize a repository.\n# create a new folder where we\u0026#39;ll initialize our repo. $ mkdir flask_app # Install Flask and Gunicorn $ pipenv install flask gunicorn # Create the necessary files $ touch runtime.txt Procfile app.py # Initialize a repository $ git init Create a simple flask app Open app.py and create a very simplistic app that returns a JSON string to see if it can be run with gunicorn:\n# app.py \u0026#34;\u0026#34;\u0026#34;Flask App Project.\u0026#34;\u0026#34;\u0026#34; from flask import Flask, jsonify app = Flask(__name__) @app.route(\u0026#39;/\u0026#39;) def index(): \u0026#34;\u0026#34;\u0026#34;Return homepage.\u0026#34;\u0026#34;\u0026#34; json_data = {\u0026#39;Hello\u0026#39;: \u0026#39;World!\u0026#39;} return jsonify(json_data) if __name__ == \u0026#39;__main__\u0026#39;: app.run() Run your Flask app with gunicorn Heroku will use Gunicorn as the web server for our Flask app. We have already installed gunicorn with pipenv; now we have to run it with our Flask app to make sure it works.\n$ pipenv run gunicorn app:app [2018-05-11 18:18:13 -0600] [6508] [INFO] Starting gunicorn 19.8.1 [2018-05-11 18:18:13 -0600] [6508] [INFO] Listening at: http://127.0.0.1:8000 (6508) [2018-05-11 18:18:13 -0600] [6508] [INFO] 
Using worker: sync [2018-05-11 18:18:13 -0600] [6561] [INFO] Booting worker with pid: 6561 It seems to work flawlessly; let\u0026rsquo;s visit our project\u0026rsquo;s home page.\nAnd it is working as it should. Now we have to deploy our project with Heroku.\nDeployment to Heroku To deploy our project we will need to edit two files:\n Procfile runtime.txt  A Procfile will be used to let Heroku know how to handle our web server.\nweb: gunicorn app:app By default, gunicorn binds the service to port 8000; if we wanted to change this port, we could do so by appending the argument --bind ip_address:port. We can also pass environment variables in case Heroku assigns a specific port.\nweb: gunicorn app:app --bind 0.0.0.0:$PORT Now let\u0026rsquo;s open runtime.txt; this file lets Heroku know what version of Python we\u0026rsquo;re using.\npython-3.6.5 We\u0026rsquo;re letting Heroku know that we\u0026rsquo;re using Python 3.6.5 (the latest stable release at the time of writing).\nOnce we have defined our runtime, we should commit our changes.\n# add files $ git add -A # commit them $ git commit -am \u0026#34;first commit\u0026#34; Once we have a working directory tree, we should be able to create a Heroku app and push our project.\n# create a heroku app, you can leave name_here empty # if you wish heroku to pick a name for you. $ heroku create name_here Creating app... done, ⬢ name_here https://name_here.herokuapp.com/ | https://git.heroku.com/name_here.git # now we can push our project to our heroku app $ git push heroku master -- SNIP -- remote: -----\u0026gt; Python app detected remote: -----\u0026gt; Installing python-3.6.5 remote: -----\u0026gt; Installing pip remote: -----\u0026gt; Installing dependencies with Pipenv 11.8.2… remote: Installing dependencies from Pipfile.lock (59a99c)… remote: -----\u0026gt; Discovering process types remote: Procfile declares types -\u0026gt; web remote: remote: -----\u0026gt; Compressing... 
remote: Done: 53.9M remote: -----\u0026gt; Launching... remote: Released v3 remote: https://floating-badlands-13121.herokuapp.com/ deployed to Heroku remote: remote: Verifying deploy... done. To https://git.heroku.com/floating-badlands-13121.git * [new branch] master -\u0026gt; master And we\u0026rsquo;re done! We can visit our app by executing heroku open, which will open a browser tab with your app\u0026rsquo;s domain name.\nflask-heroku-example I wrote an example repository that already has a runtime and Procfile so you can deploy your Flask project to Heroku; here\u0026rsquo;s the repo.\nIf you have any questions, feel free to comment below. Have fun.\n","permalink":"https://codingdose.info/posts/deploy-a-flask-project-to-heroku/","summary":"I was suddenly in the need of deploying a very basic Flask API to the cloud, so it can be available to the public.\nThe thing is, Flask is not made for a scalable production environment, but if you only need to deploy a very basic web server to Heroku then this guide is for you.\nInitialize a repository First of all we will need to set up a virtual environment with Pipenv, a Flask app, and initialize a repository.","title":"Deploy a Flask Project to Heroku"},{"content":"Before we start: A short disclaimer Don\u0026rsquo;t do this. Don\u0026rsquo;t go there importing random code you found on the internet into your code, because that is dangerous: arbitrary code could be injected into your machine or a client\u0026rsquo;s machine, and you don\u0026rsquo;t want that. 
Remember to always keep your code and modules in a version control system and to have complete knowledge of what you\u0026rsquo;re loading.\n Today I was talking with some of the guys in a Discord server about Python, called Python Discord (which I definitely recommend you check out!), and stumbled upon a guy who was requesting assistance.\n(Yeah\u0026hellip; I\u0026rsquo;m theinquisitor.)\nWhat he wanted to do was to import a variable that contains a list object, which is perfectly fine; the only issue was that the variable was in a web page called dumptext.com.\nAnd to make things worse, when you inspect the web page\u0026rsquo;s source code you find that the text isn\u0026rsquo;t even raw text. I don\u0026rsquo;t know why anyone would add a raw button that doesn\u0026rsquo;t even return raw text.\nThis is a bad idea, but hey: I like to solve problems, and we will work with this scenario just for the fun of it, so let\u0026rsquo;s start with Solution #1.\nSolution #1: Create a module containing the variable name The first solution consists of:\n Request the web page\u0026rsquo;s HTML source code. Parse the HTML code with BeautifulSoup. Extract the contents inside the HTML\u0026rsquo;s \u0026lt;pre\u0026gt;\u0026lt;/pre\u0026gt; tags, which contain the list we want. Save the list into a module called wordlist.py that we can import. Import the variable words from our newly created wordlist.py module.  Before we start, we first have to create an empty module that will hold our data: wordlist.py\n$ touch wordlist.py Now we proceed to create our main Python script: solution1.py\n\u0026#34;\u0026#34;\u0026#34;Solution 1.\u0026#34;\u0026#34;\u0026#34; import requests from bs4 import BeautifulSoup from importlib import reload # our module containing the variable with the list object import wordlist # request the HTML source and extract the text inside the \u0026#39;\u0026lt;pre\u0026gt;\u0026lt;/pre\u0026gt;\u0026#39; tags. 
dumptext_data = requests.get(\u0026#39;https://dumptext.com/Ai9Ww8j4/raw/\u0026#39;).text parsed_dumptext_html = BeautifulSoup(dumptext_data, \u0026#39;html.parser\u0026#39;) scraped_list = parsed_dumptext_html.pre.text # save the contents of the scraped_list into a module with open(\u0026#39;wordlist.py\u0026#39;, \u0026#39;w\u0026#39;) as wordlist_object: wordlist_object.write(scraped_list) # reload the module to import the variable \u0026#39;words\u0026#39; reload(wordlist) # now we can use the variable words everywhere we want. words = wordlist.words print(type(words)) print(words) Now, we are familiar with the packages requests and BeautifulSoup, but what about importlib? Well, importlib has a lot of utilities for our imports. In this case we imported the reload() function from importlib, which allows us to reload a previously imported module, fresh as new, including the newly created variable holding our list. Remember that PEP8 recommends adding your imports at the top of the file, following your docstring, not in the middle of the file.\nLet\u0026rsquo;s see if our solution works well:\n$ python solution1.py \u0026lt;class \u0026#39;list\u0026#39;\u0026gt; [\u0026#39;aah\u0026#39;, \u0026#39;aal\u0026#39;, \u0026#39;aas\u0026#39;, \u0026#39;aba\u0026#39;, \u0026#39;abo\u0026#39;, \u0026#39;abs\u0026#39;, \u0026#39;aby\u0026#39;, \u0026#39;ace\u0026#39;, \u0026#39;act\u0026#39;, \u0026#39;add\u0026#39;, \u0026#39;ado\u0026#39;, \u0026#39;ads\u0026#39;, \u0026#39;adz\u0026#39;, \u0026#39;aff\u0026#39;, \u0026#39;aft\u0026#39;, \u0026#39;aga\u0026#39;, \u0026#39;age\u0026#39;, \u0026#39;ago\u0026#39;, \u0026#39;ags\u0026#39;, \u0026#39;aha\u0026#39;, \u0026#39;ahi\u0026#39;, \u0026#39;ahs\u0026#39;, \u0026#39;aid\u0026#39;, \u0026#39;ail\u0026#39;, \u0026#39;aim\u0026#39;, \u0026#39;ain\u0026#39;, \u0026#39;air\u0026#39;, \u0026#39;ais\u0026#39;, \u0026#39;ait\u0026#39;, \u0026#39;ala\u0026#39;, \u0026#39;alb\u0026#39;, \u0026#39;ale\u0026#39;, 
\u0026#39;all\u0026#39;, \u0026#39;alp\u0026#39;, \u0026#39;als\u0026#39;, \u0026#39;alt\u0026#39;, \u0026#39;ama\u0026#39;, \u0026#39;ami\u0026#39;, \u0026#39;amp\u0026#39;, \u0026#39;amu\u0026#39;, \u0026#39;ana\u0026#39;, \u0026#39;and\u0026#39;, \u0026#39;ane\u0026#39;, \u0026#39;ani\u0026#39;, \u0026#39;ant\u0026#39;, \u0026#39;any\u0026#39;, ...] It works! We can see that it is an object of the class list and it prints successfully.\nThere\u0026rsquo;s only one thing\u0026hellip; we shouldn\u0026rsquo;t do this. Importing random code from the Internet is not a great option, believe me. You could do this in a very specific scenario where you absolutely don\u0026rsquo;t have any other choice: the file is read-only and you are also its owner. If it\u0026rsquo;s a controlled file then good, but realistically I don\u0026rsquo;t see that happening, so please, avoid this solution.\nIt was fun to write though.\nSolution #2: A Regular Expression to Match the List Items This is a much better (and cleaner) solution. We\u0026rsquo;ll do the same scraping as before, but instead of creating and loading a module, we will pick everything inside a pair of single quotes ('') from the parsed HTML code with a regular expression and add those results to a list.\nLet\u0026rsquo;s jump straight into the code: solution2.py\n\u0026#34;\u0026#34;\u0026#34;Solution 2.\u0026#34;\u0026#34;\u0026#34; import requests from bs4 import BeautifulSoup import re # request the HTML source and extract the text inside the \u0026#39;\u0026lt;pre\u0026gt;\u0026lt;/pre\u0026gt;\u0026#39; tags. 
dumptext_data = requests.get(\u0026#39;https://dumptext.com/Ai9Ww8j4/raw/\u0026#39;).text parsed_dumptext_html = BeautifulSoup(dumptext_data, \u0026#39;html.parser\u0026#39;) scraped_list = parsed_dumptext_html.pre.text # define a pattern that captures everything inside a pair of quotes (\u0026#39;\u0026#39;) pattern = re.compile(r\u0026#39;\\\u0026#39;(.*?)\\\u0026#39;\u0026#39;) # create a list of items using list comprehension with all the matching items words = [word for word in re.findall(pattern, scraped_list)] # now we can use the variable words everywhere we want. print(type(words)) print(words) Let\u0026rsquo;s see if it works correctly:\n$ python solution2.py \u0026lt;class \u0026#39;list\u0026#39;\u0026gt; [\u0026#39;aah\u0026#39;, \u0026#39;aal\u0026#39;, \u0026#39;aas\u0026#39;, \u0026#39;aba\u0026#39;, \u0026#39;abo\u0026#39;, \u0026#39;abs\u0026#39;, \u0026#39;aby\u0026#39;, \u0026#39;ace\u0026#39;, \u0026#39;act\u0026#39;, \u0026#39;add\u0026#39;, \u0026#39;ado\u0026#39;, \u0026#39;ads\u0026#39;, \u0026#39;adz\u0026#39;, \u0026#39;aff\u0026#39;, \u0026#39;aft\u0026#39;, \u0026#39;aga\u0026#39;, \u0026#39;age\u0026#39;, \u0026#39;ago\u0026#39;, \u0026#39;ags\u0026#39;, \u0026#39;aha\u0026#39;, \u0026#39;ahi\u0026#39;, \u0026#39;ahs\u0026#39;, \u0026#39;aid\u0026#39;, \u0026#39;ail\u0026#39;, \u0026#39;aim\u0026#39;, \u0026#39;ain\u0026#39;, \u0026#39;air\u0026#39;, \u0026#39;ais\u0026#39;, \u0026#39;ait\u0026#39;, \u0026#39;ala\u0026#39;, \u0026#39;alb\u0026#39;, \u0026#39;ale\u0026#39;, \u0026#39;all\u0026#39;, \u0026#39;alp\u0026#39;, \u0026#39;als\u0026#39;, \u0026#39;alt\u0026#39;, \u0026#39;ama\u0026#39;, \u0026#39;ami\u0026#39;, \u0026#39;amp\u0026#39;, \u0026#39;amu\u0026#39;, \u0026#39;ana\u0026#39;, \u0026#39;and\u0026#39;, \u0026#39;ane\u0026#39;, \u0026#39;ani\u0026#39;, \u0026#39;ant\u0026#39;, \u0026#39;any\u0026#39;, ...] 
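The heart of this solution is the non-greedy capture group in the pattern. Here is a minimal, self-contained sketch of the same idea, using a small hypothetical sample string in place of the scraped page, so no network access is needed:

```python
import re

# Hypothetical stand-in for the text scraped from the web page
scraped_sample = "words = ['aah', 'aal', 'aas']"

# The non-greedy (.*?) stops at the nearest closing quote, so each
# quoted word becomes its own match instead of one giant match
pattern = re.compile(r"'(.*?)'")

# findall returns only the captured groups, with the quotes already stripped
words = pattern.findall(scraped_sample)
print(words)  # ['aah', 'aal', 'aas']
```

Since re.findall already returns a list, the list comprehension in solution2.py is optional; calling findall directly gives the same result.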
That\u0026rsquo;s it! No need for stinkin' modules or injecting random code into our project; no importing and no reloading:\n We imported the re library, which aids us in searching strings using regular expressions (or RegEx). We use the same scraping technique as before. Now we define a RegEx pattern with re.compile(r\u0026#39;__pattern__\u0026#39;) We create a list of words, where word is each match found by re.findall(pattern, scraped_list)  And that\u0026rsquo;s all; you can now use your words in a much safer way.\nPython Discord If you\u0026rsquo;re interested in being part of a community, then I cannot recommend Python Discord enough. It has helped me learn so much about Python; people are really helpful and we are always growing.\nSo please, be my guest and hop into this awesome community; here\u0026rsquo;s your invite: https://pythondiscord.com/invite\nEveryone is welcome.\n","permalink":"https://codingdose.info/posts/case-import-variables-from-a-web-site-python/","summary":"Before we start: A short disclaimer Don\u0026rsquo;t do this. Don\u0026rsquo;t go there importing random code you found on the internet into your code because that can be dangerous and code would be injected into your machine or a client\u0026rsquo;s machine and you don\u0026rsquo;t want that. 
Remember to always have your code and modules in a version control system and that you have complete knowledge of what you\u0026rsquo;re loading.\n Today I was talking with some of the guys in a Discord server about python, called Python Discord (which I definitely recommend you to check out!","title":"Importing Variables from a Website"},{"content":"You don\u0026rsquo;t need a bunch of if\u0026rsquo;s and else if\u0026rsquo;s to create an array of directories in Python, just the good ol' makedirs.\nCreate an array of directories To create an array of directories, import the makedirs function from the os package.\n\u0026gt;\u0026gt;\u0026gt; from os import makedirs \u0026gt;\u0026gt;\u0026gt; makedirs(\u0026#39;1/2/3/4/5\u0026#39;) You\u0026rsquo;ll see that your directories are created correctly; let\u0026rsquo;s run the tree command outside of Python to see that they were actually created.\n$ tree 1 1 └── 2 └── 3 └── 4 └── 5 4 directories, 0 files You can see that all our directories were created (it says 4 because it excludes the root directory, 1).\nWe can create any array of directories as long as the last directory that we want to create does not already exist. Let\u0026rsquo;s create directory number 6.\n\u0026gt;\u0026gt;\u0026gt; makedirs(\u0026#39;1/2/3/4/5/6\u0026#39;) Python does not complain, which usually means it all went well; let\u0026rsquo;s inspect our tree.\n$ tree 1 1 └── 2 └── 3 └── 4 └── 5 └── 6 5 directories, 0 files Perfect, there are 5 directories created under 1 now. 
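As an aside (this is not the approach the rest of this post uses), the standard library's pathlib can do the same nested creation with Path.mkdir and its parents=True flag; here's a quick sketch that works inside a temporary directory so nothing is left behind:

```python
import tempfile
from pathlib import Path

# Work inside a throwaway temporary directory instead of the current folder
root = Path(tempfile.mkdtemp())

# parents=True creates every missing intermediate directory, just like makedirs
(root / '1' / '2' / '3' / '4' / '5').mkdir(parents=True)

print((root / '1' / '2' / '3' / '4' / '5').is_dir())  # True
```

Like makedirs, Path.mkdir raises FileExistsError if the final directory already exists.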
Would makedirs complain if we wanted to overwrite the same directory structure?\n\u0026gt;\u0026gt;\u0026gt; from os import makedirs \u0026gt;\u0026gt;\u0026gt; makedirs(\u0026#39;1/2/3/4/5/6\u0026#39;) Traceback (most recent call last): File \u0026#34;\u0026lt;stdin\u0026gt;\u0026#34;, line 1, in \u0026lt;module\u0026gt; File \u0026#34;/home/franccesco/.pyenv/versions/3.6.5/lib/python3.6/os.py\u0026#34;, line 220, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: \u0026#39;1/2/3/4/5/6\u0026#39; Yes, and this is because makedirs couldn\u0026rsquo;t find a single directory to create (they already exist), but sometimes we don\u0026rsquo;t want makedirs to complain and exit abruptly when it finds that the directory structure is already there.\nTo suppress this behavior we can pass the argument exist_ok=True so that makedirs doesn\u0026rsquo;t raise an exception.\n\u0026gt;\u0026gt;\u0026gt; from os import makedirs \u0026gt;\u0026gt;\u0026gt; makedirs(\u0026#39;1/2/3/4/5/6\u0026#39;, exist_ok=True) \u0026gt;\u0026gt;\u0026gt; makedirs(\u0026#39;1/2/3/4/5/6\u0026#39;, exist_ok=True) See? 
Now makedirs doesn\u0026rsquo;t complain if the directory structure is already there: our directories are still created, or simply ignored if they already exist.\n$ tree 1 1 └── 2 └── 3 └── 4 └── 5 └── 6 5 directories, 0 files ","permalink":"https://codingdose.info/posts/create-multiple-directories-with-makedirs-python/","summary":"You don\u0026rsquo;t need a bunch of if\u0026rsquo;s and else if\u0026rsquo;s to create an array of directories in python, just the good ol' makedirs.\nCreate an array of directories To create an array of directories you must import the package os and import the method makedirs\n\u0026gt;\u0026gt;\u0026gt; from os import makedirs \u0026gt;\u0026gt;\u0026gt; makedirs(\u0026#39;1/2/3/4/5\u0026#39;) You\u0026rsquo;ll see now that your directories are created correctly, let\u0026rsquo;s run the command tree outside of python to see if they were actually created.","title":"Create Multiple Directories With Makedirs in Python"},{"content":"Deques are a great way to handle memory-efficient appends to a list-like object; deque is a container class in the collections module that allows you to handle items at both ends of the sequence efficiently.\nCreate a deque To create a deque simply import the deque class from the collections module and call deque(_items_) on a variable.\n\u0026gt;\u0026gt;\u0026gt; from collections import deque \u0026gt;\u0026gt;\u0026gt; dq = deque(\u0026#39;123\u0026#39;) \u0026gt;\u0026gt;\u0026gt; dq deque([\u0026#39;1\u0026#39;, \u0026#39;2\u0026#39;, \u0026#39;3\u0026#39;]) \u0026gt;\u0026gt;\u0026gt; type(dq) \u0026lt;class \u0026#39;collections.deque\u0026#39;\u0026gt; Or if you wish to create an empty deque.\n\u0026gt;\u0026gt;\u0026gt; dq = deque() \u0026gt;\u0026gt;\u0026gt; dq deque([]) What happens if you want to create a deque of integers?\n\u0026gt;\u0026gt;\u0026gt; dq = deque(123) Traceback (most recent call last): File \u0026#34;\u0026lt;stdin\u0026gt;\u0026#34;, line 1, in \u0026lt;module\u0026gt; TypeError: 
\u0026#39;int\u0026#39; object is not iterable You simply can\u0026rsquo;t. Why is that? Because integers are not iterable in Python but strings are.\nThis is because integers, unlike strings, don\u0026rsquo;t have an __iter__ method and therefore can\u0026rsquo;t be iterated over.\n\u0026gt;\u0026gt;\u0026gt; str.__iter__ \u0026lt;slot wrapper \u0026#39;__iter__\u0026#39; of \u0026#39;str\u0026#39; objects\u0026gt; \u0026gt;\u0026gt;\u0026gt; int.__iter__ Traceback (most recent call last): File \u0026#34;\u0026lt;stdin\u0026gt;\u0026#34;, line 1, in \u0026lt;module\u0026gt; AttributeError: type object \u0026#39;int\u0026#39; has no attribute \u0026#39;__iter__\u0026#39; Accessing items by index We can access items in a deque with an index number.\n\u0026gt;\u0026gt;\u0026gt; dq[0] \u0026#39;1\u0026#39; \u0026gt;\u0026gt;\u0026gt; dq[1] \u0026#39;2\u0026#39; \u0026gt;\u0026gt;\u0026gt; dq[2] \u0026#39;3\u0026#39; Converting an item to integer You can convert an item to an integer simply by wrapping it in the int() function.\n\u0026gt;\u0026gt;\u0026gt; one = int(dq[0]) \u0026gt;\u0026gt;\u0026gt; type(one) \u0026lt;class \u0026#39;int\u0026#39;\u0026gt; \u0026gt;\u0026gt;\u0026gt; one 1 Appending items to a deque We can append new items to our deque, on either the left or the right side.\n\u0026gt;\u0026gt;\u0026gt; dq.append(\u0026#39;4\u0026#39;) \u0026gt;\u0026gt;\u0026gt; dq deque([\u0026#39;1\u0026#39;, \u0026#39;2\u0026#39;, \u0026#39;3\u0026#39;, \u0026#39;4\u0026#39;]) \u0026gt;\u0026gt;\u0026gt; dq.appendleft(\u0026#39;0\u0026#39;) \u0026gt;\u0026gt;\u0026gt; dq deque([\u0026#39;0\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;2\u0026#39;, \u0026#39;3\u0026#39;, \u0026#39;4\u0026#39;]) Extending our deque We can also add multiple values at once.\n\u0026gt;\u0026gt;\u0026gt; dq.extend(\u0026#39;567\u0026#39;) \u0026gt;\u0026gt;\u0026gt; dq deque([\u0026#39;0\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;2\u0026#39;, \u0026#39;3\u0026#39;, \u0026#39;4\u0026#39;, 
\u0026#39;5\u0026#39;, \u0026#39;6\u0026#39;, \u0026#39;7\u0026#39;]) \u0026gt;\u0026gt;\u0026gt; dq.extendleft(\u0026#39;cba\u0026#39;) \u0026gt;\u0026gt;\u0026gt; dq deque([\u0026#39;a\u0026#39;, \u0026#39;b\u0026#39;, \u0026#39;c\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;2\u0026#39;, \u0026#39;3\u0026#39;, \u0026#39;4\u0026#39;, \u0026#39;5\u0026#39;, \u0026#39;6\u0026#39;, \u0026#39;7\u0026#39;]) Popping items We can pop items from both ends.\n\u0026gt;\u0026gt;\u0026gt; dq.pop() \u0026#39;7\u0026#39; \u0026gt;\u0026gt;\u0026gt; dq deque([\u0026#39;a\u0026#39;, \u0026#39;b\u0026#39;, \u0026#39;c\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;2\u0026#39;, \u0026#39;3\u0026#39;, \u0026#39;4\u0026#39;, \u0026#39;5\u0026#39;, \u0026#39;6\u0026#39;]) \u0026gt;\u0026gt;\u0026gt; dq.popleft() \u0026#39;a\u0026#39; \u0026gt;\u0026gt;\u0026gt; dq deque([\u0026#39;b\u0026#39;, \u0026#39;c\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;2\u0026#39;, \u0026#39;3\u0026#39;, \u0026#39;4\u0026#39;, \u0026#39;5\u0026#39;, \u0026#39;6\u0026#39;]) Rotating items Or rotate our items if we want.\n\u0026gt;\u0026gt;\u0026gt; dq.rotate(-2) \u0026gt;\u0026gt;\u0026gt; dq deque([\u0026#39;0\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;2\u0026#39;, \u0026#39;3\u0026#39;, \u0026#39;4\u0026#39;, \u0026#39;5\u0026#39;, \u0026#39;6\u0026#39;, \u0026#39;b\u0026#39;, \u0026#39;c\u0026#39;]) \u0026gt;\u0026gt;\u0026gt; dq.rotate(2) \u0026gt;\u0026gt;\u0026gt; dq deque([\u0026#39;b\u0026#39;, \u0026#39;c\u0026#39;, \u0026#39;0\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;2\u0026#39;, \u0026#39;3\u0026#39;, \u0026#39;4\u0026#39;, \u0026#39;5\u0026#39;, \u0026#39;6\u0026#39;]) Slicing deques We cannot slice our deques, at least not directly.\n\u0026gt;\u0026gt;\u0026gt; dq[2:] Traceback (most recent call last): File \u0026#34;\u0026lt;stdin\u0026gt;\u0026#34;, line 1, in \u0026lt;module\u0026gt; TypeError: sequence index must be 
integer, not \u0026#39;slice\u0026#39; You can import itertools and return a sliced list (not a deque) of items with the islice() function.\n\u0026gt;\u0026gt;\u0026gt; import itertools \u0026gt;\u0026gt;\u0026gt; list(itertools.islice(dq, 2, 9)) [\u0026#39;0\u0026#39;, \u0026#39;1\u0026#39;, \u0026#39;2\u0026#39;, \u0026#39;3\u0026#39;, \u0026#39;4\u0026#39;, \u0026#39;5\u0026#39;, \u0026#39;6\u0026#39;] You can find more information about deques in the official Python documentation:\n Deque Objects  ","permalink":"https://codingdose.info/posts/deques-in-python/","summary":"Deques are a great way to handle memory-efficient appends to a list-like object; deque is a container class in the collections module that allows you to handle items at both ends of the sequence efficiently.\nCreate a deque To create a deque simply import the deque class from the collections module and call deque(_items_) on a variable.\n\u0026gt;\u0026gt;\u0026gt; from collections import deque \u0026gt;\u0026gt;\u0026gt; dq = deque(\u0026#39;123\u0026#39;) \u0026gt;\u0026gt;\u0026gt; dq deque([\u0026#39;1\u0026#39;, \u0026#39;2\u0026#39;, \u0026#39;3\u0026#39;]) \u0026gt;\u0026gt;\u0026gt; type(dq) \u0026lt;class \u0026#39;collections.deque\u0026#39;\u0026gt; Or if you wish to create an empty deque.","title":"How to Handle a Deque in Python"},{"content":"What is Flask?  Flask is a microframework for Python based on Werkzeug, Jinja 2 and good intentions.\n Flask is a microframework for Python that you can use to quickly create APIs and websites. 
It\u0026rsquo;s a great and easy-to-use platform; let\u0026rsquo;s create a simple API, but first we will go through the basics.\nFlask Installation We\u0026rsquo;re going to install Flask in our virtual environment to later import it into our code.\npipenv install flask A basic web server Ok, now that we have installed Flask in our virtual environment, let\u0026rsquo;s create the main page: create a file named api.py and input the following code snippet.\n\u0026#34;\u0026#34;\u0026#34;Flask API.\u0026#34;\u0026#34;\u0026#34; from flask import Flask app = Flask(__name__) @app.route(\u0026#39;/\u0026#39;) def index(): \u0026#34;\u0026#34;\u0026#34;Return main page.\u0026#34;\u0026#34;\u0026#34; return \u0026#39;It Works!\u0026#39; if __name__ == \u0026#39;__main__\u0026#39;: app.run() That\u0026rsquo;s enough to bring up our API and return It Works! on the main page.\nWorking with Variables Let\u0026rsquo;s add a salute(name) function that accepts one argument, a name, and returns 'Hello, {name}!'\n\u0026#34;\u0026#34;\u0026#34;Flask API.\u0026#34;\u0026#34;\u0026#34; from flask import Flask app = Flask(__name__) @app.route(\u0026#39;/\u0026#39;) def index(): \u0026#34;\u0026#34;\u0026#34;Return main page.\u0026#34;\u0026#34;\u0026#34; return \u0026#39;It Works!\u0026#39; @app.route(\u0026#39;/\u0026lt;name\u0026gt;\u0026#39;) def salute(name): \u0026#34;\u0026#34;\u0026#34;Salute someone.\u0026#34;\u0026#34;\u0026#34; return f\u0026#39;Hello, {name}!\u0026#39; if __name__ == \u0026#39;__main__\u0026#39;: app.run() You can also define whether a variable should be an Integer, String, etc., like this:\n@app.route(\u0026#39;/\u0026lt;string:name\u0026gt;\u0026#39;) Here\u0026rsquo;s a reference table:\n   Type Comment     string Accepts a string without slashes   int Accepts integers   float Accepts floating point values   path Accepts strings with slashes   any Accepts one of the provided items   UUID Accepts UUID strings    Return a JSON 
Encoded String We can Jsonify a dictionary and return it to the browser as JSON serialized data; let\u0026rsquo;s put our dog data into a dictionary and serialize it:\n# -- SNIP -- @app.route(\u0026#39;/mydog\u0026#39;) def get_dog(): \u0026#34;\u0026#34;\u0026#34;Return JSON data about our dog.\u0026#34;\u0026#34;\u0026#34; doggo = {\u0026#39;name\u0026#39;: \u0026#39;sam\u0026#39;, \u0026#39;food\u0026#39;: \u0026#39;meat\u0026#39;, \u0026#39;hobby\u0026#39;: \u0026#39;sleep\u0026#39;} return jsonify(doggo) And when we go to the /mydog URL we can see our serialized dictionary:\nA Flask Project Let\u0026rsquo;s create a little Flask project to warm up, shall we? We\u0026rsquo;re going to build an API that scans localhost with Nmap and returns its open ports with only one method.\napi.py\n\u0026#34;\u0026#34;\u0026#34;An Nmap Scanner and Parser.\u0026#34;\u0026#34;\u0026#34; import xmltodict from subprocess import run from flask import Flask, jsonify app = Flask(__name__) @app.route(\u0026#39;/\u0026#39;) def index(): \u0026#34;\u0026#34;\u0026#34;Return the main page.\u0026#34;\u0026#34;\u0026#34; return \u0026#34;Nmap Scanner, a more sophisticated page is coming soon.\u0026#34; @app.route(\u0026#39;/openports\u0026#39;) def scan_localhost(): \u0026#34;\u0026#34;\u0026#34;Scan a host with Nmap.\u0026#34;\u0026#34;\u0026#34; # run nmap scan in another process run([\u0026#39;nmap\u0026#39;, \u0026#39;-T5\u0026#39;, \u0026#39;--open\u0026#39;, \u0026#39;-oX\u0026#39;, \u0026#39;scan.xml\u0026#39;, \u0026#39;localhost\u0026#39;]) # parse the Nmap XML report and convert it to a dictionary with open(\u0026#39;scan.xml\u0026#39;) as raw_xml: nmap_scan = xmltodict.parse(raw_xml.read()) # Jsonify the dictionary and return it return jsonify(nmap_scan) if __name__ == \u0026#39;__main__\u0026#39;: app.run() So, what are we doing here?\n Firstly, we import our libraries: Flask for our API of course; we also import run from the package subprocess, which allows us to run subprocesses 
from Python, and the library xmltodict to parse XML data and convert it into a dictionary so we can return it later as Jsonified data. Then we define our route openports  @app.route(\u0026#39;/openports\u0026#39;) def scan_localhost(): \u0026#34;\u0026#34;\u0026#34;Scan a host with Nmap.\u0026#34;\u0026#34;\u0026#34;  Spawn an Nmap subprocess that scans our localhost looking for open ports only; we designate scan.xml as our XML output.  # run nmap scan in another process run([\u0026#39;nmap\u0026#39;, \u0026#39;-T5\u0026#39;, \u0026#39;--open\u0026#39;, \u0026#39;-oX\u0026#39;, \u0026#39;scan.xml\u0026#39;, \u0026#39;localhost\u0026#39;])  We open our report scan.xml, convert it to a dictionary with xmltodict.parse(raw_xml.read()) and store it into nmap_scan.  # parse the Nmap XML report and convert it to a dictionary with open(\u0026#39;scan.xml\u0026#39;) as raw_xml: nmap_scan = xmltodict.parse(raw_xml.read())  Last, but not least, we return the dictionary as Jsonified data.  # Jsonify the dictionary and return it return jsonify(nmap_scan) And when we visit https://localhost/openports we can successfully see our API in action: Of course, our code could be way, waaaaay better, but a simple example will suffice.\nRemember to check out the Flask documentation; it is immensely useful and we only scratched the surface here.\n","permalink":"https://codingdose.info/posts/create-a-basic-api-with-flask/","summary":"What is Flask?  Flask is a microframework for Python based on Werkzeug, Jinja 2 and good intentions.\n Flask is a microframework for Python that you can use to quickly create APIs and websites. 
It\u0026rsquo;s a great and easy-to-use platform; let\u0026rsquo;s create a simple API, but first we will go through the basics.\nFlask Installation We\u0026rsquo;re going to install Flask in our virtual environment to later import it into our code.","title":"Create a Basic API With Flask"},{"content":"namedtuple creates tuple subclasses with named fields, which allows us to build objects (or classes) with tuple-like attributes; it\u0026rsquo;s easy to create a class with immutable attributes with this factory.\nCreating an object To create a namedtuple we only need the name of the class first and then the attribute list.\n\u0026gt;\u0026gt;\u0026gt; from collections import namedtuple \u0026gt;\u0026gt;\u0026gt; Person = namedtuple(\u0026#39;Person\u0026#39;, \u0026#39;name job_position sex married\u0026#39;) \u0026gt;\u0026gt;\u0026gt; john = Person(\u0026#39;john\u0026#39;, \u0026#39;developer\u0026#39;, \u0026#39;male\u0026#39;, False) \u0026gt;\u0026gt;\u0026gt; john Person(name=\u0026#39;john\u0026#39;, job_position=\u0026#39;developer\u0026#39;, sex=\u0026#39;male\u0026#39;, married=False) Be aware that you can pass an argument list as a string ('att1 att2'), as an actual list (['att1', 'att2']) or a string with comma separated values ('att1, att2').\nAccessing attributes We can access the person attributes just like any other class.\n\u0026gt;\u0026gt;\u0026gt; john.name \u0026#39;john\u0026#39; \u0026gt;\u0026gt;\u0026gt; john.job_position \u0026#39;developer\u0026#39; \u0026gt;\u0026gt;\u0026gt; john.sex \u0026#39;male\u0026#39; \u0026gt;\u0026gt;\u0026gt; john.married False Docstring We can also assign docstrings to our attributes.\n\u0026gt;\u0026gt;\u0026gt; Person.__doc__ = \u0026#39;A person attributes\u0026#39; \u0026#39;A person attributes\u0026#39; \u0026gt;\u0026gt;\u0026gt; Person.name.__doc__ = \u0026#34;A person\u0026#39;s name\u0026#34; \u0026#34;A person\u0026#39;s name\u0026#34; \u0026gt;\u0026gt;\u0026gt; Person.job_position.__doc__ = \u0026#34;A 
person\u0026#39;s job position\u0026#34; \u0026#34;A person\u0026#39;s job position\u0026#34; \u0026gt;\u0026gt;\u0026gt; Person.sex.__doc__ = \u0026#39;Yes\u0026#39; \u0026#39;Yes\u0026#39; \u0026gt;\u0026gt;\u0026gt; Person.married.__doc__ = \u0026#39;Is the person married?\u0026#39; \u0026#39;Is the person married?\u0026#39; Changing values As our attributes are stored in a tuple, we cannot change them.\n\u0026gt;\u0026gt;\u0026gt; john.name = \u0026#39;Johnny\u0026#39; Traceback (most recent call last): File \u0026#34;\u0026lt;stdin\u0026gt;\u0026#34;, line 1, in \u0026lt;module\u0026gt; AttributeError: can\u0026#39;t set attribute But we can copy the instance with new values.\n\u0026gt;\u0026gt;\u0026gt; johnny = john._replace(name=\u0026#39;johnny\u0026#39;, married=True) \u0026gt;\u0026gt;\u0026gt; johnny Person(name=\u0026#39;johnny\u0026#39;, job_position=\u0026#39;developer\u0026#39;, sex=\u0026#39;male\u0026#39;, married=True) Convert a Dictionary to a Class Using the same class Person that we defined above.\n\u0026gt;\u0026gt;\u0026gt; d = {\u0026#39;name\u0026#39;: \u0026#39;john\u0026#39;, \u0026#39;job_position\u0026#39;: \u0026#39;developer\u0026#39;, \u0026#39;sex\u0026#39;: \u0026#39;male\u0026#39;, \u0026#39;married\u0026#39;: False} \u0026gt;\u0026gt;\u0026gt; john = Person(**d) \u0026gt;\u0026gt;\u0026gt; john You can read more about namedtuples here\n","permalink":"https://codingdose.info/posts/create-and-object-with-namedtuple/","summary":"namedtuple creates tuple subclasses with named fields, which allows us to build objects (or classes) with tuple-like attributes; it\u0026rsquo;s easy to create a class with immutable attributes with this factory.\nCreating an object To create a namedtuple we only need the name of the class first and then the attribute list.\n\u0026gt;\u0026gt;\u0026gt; from collections import namedtuple \u0026gt;\u0026gt;\u0026gt; Person = namedtuple(\u0026#39;Person\u0026#39;, \u0026#39;name job_position sex married\u0026#39;) 
\u0026gt;\u0026gt;\u0026gt; john = Person(\u0026#39;john\u0026#39;, \u0026#39;developer\u0026#39;, \u0026#39;male\u0026#39;, False) \u0026gt;\u0026gt;\u0026gt; john Person(name=\u0026#39;john\u0026#39;, job_position=\u0026#39;developer\u0026#39;, sex=\u0026#39;male\u0026#39;, married=False) Be aware that you can pass an argument list as a string ('att1 att2'), as an actual list (['att1', 'att2']) or a string with comma separated values ('att1, att2').","title":"Create an Object With Namedtuple"},{"content":"The glob library lets you return a list of filenames and folders, and since Python 3.5 it supports recursive matching with the ** pattern; here\u0026rsquo;s how to use it.\nReturn files and folders in current folder. \u0026gt;\u0026gt;\u0026gt; glob(\u0026#39;**\u0026#39;) [\u0026#39;scaffolds\u0026#39;, \u0026#39;node_modules\u0026#39;, \u0026#39;yarn.lock\u0026#39;, \u0026#39;_config.yml\u0026#39;, \u0026#39;source\u0026#39;, \u0026#39;db.json\u0026#39;, \u0026#39;themes\u0026#39;, \u0026#39;package.json\u0026#39;, \u0026#39;package-lock.json\u0026#39;] Return files and folders recursively \u0026gt;\u0026gt;\u0026gt; glob(\u0026#39;**\u0026#39;, recursive=True) [\u0026#39;scaffolds\u0026#39;, \u0026#39;scaffolds/post.md\u0026#39;, \u0026#39;scaffolds/page.md\u0026#39;, \u0026#39;scaffolds/draft.md\u0026#39;, \u0026#39;node_modules\u0026#39;, \u0026#39;...\u0026#39;] Return only a specific type of file  non-recursively  \u0026gt;\u0026gt;\u0026gt; glob(\u0026#39;*.json\u0026#39;, recursive=True) [\u0026#39;db.json\u0026#39;, \u0026#39;package.json\u0026#39;, \u0026#39;package-lock.json\u0026#39;]  recursively  \u0026gt;\u0026gt;\u0026gt; glob(\u0026#39;**/*.md\u0026#39;, recursive=True) [\u0026#39;scaffolds/post.md\u0026#39;, \u0026#39;scaffolds/page.md\u0026#39;, \u0026#39;scaffolds/draft.md\u0026#39;, \u0026#39;...\u0026#39;]  recursively from another folder  \u0026gt;\u0026gt;\u0026gt; glob(\u0026#39;source/**/*.md\u0026#39;, recursive=True) 
[\u0026#39;source/README.md\u0026#39;, \u0026#39;source/about/index.md\u0026#39;, \u0026#39;source/_posts/class-inheritance-with-python.md\u0026#39;, \u0026#39;source/_posts/How-to-securely-store-sensitive-configuration-with-dotenv.md\u0026#39;, \u0026#39;...\u0026#39;] Print a list of filenames:  Relative path  \u0026gt;\u0026gt;\u0026gt; for file in glob(\u0026#39;source/**/*.md\u0026#39;, recursive=True): ... print(file) ... \u0026#39;source/README.md\u0026#39; \u0026#39;source/about/index.md\u0026#39; \u0026#39;source/_posts/class-inheritance-with-python.md\u0026#39; \u0026#39;source/_posts/How-to-securely-store-sensitive-configuration-with-dotenv.md\u0026#39; \u0026#39;source/_posts/Hello-all.md\u0026#39; \u0026#39;source/_posts/return-a-list-of-files-and-absolute-path-with-python.md\u0026#39; \u0026#39;source/_posts/scraping-web-data-with-requests-and-beautifulsoup-part-2.md\u0026#39;  Absolute path  \u0026gt;\u0026gt;\u0026gt; import os \u0026gt;\u0026gt;\u0026gt; from glob import glob \u0026gt;\u0026gt;\u0026gt; for file in glob(\u0026#39;source/**/*.md\u0026#39;, recursive=True): ... print(os.path.abspath(file)) ... 
\u0026#39;/home/franccesco/workspace/codingdose.info/source/README.md\u0026#39; \u0026#39;/home/franccesco/workspace/codingdose.info/source/about/index.md\u0026#39; \u0026#39;/home/franccesco/workspace/codingdose.info/source/_posts/class-inheritance-with-python.md\u0026#39; \u0026#39;/home/franccesco/workspace/codingdose.info/source/_posts/How-to-securely-store-sensitive-configuration-with-dotenv.md\u0026#39; \u0026#39;/home/franccesco/workspace/codingdose.info/source/_posts/Hello-all.md\u0026#39; \u0026#39;/home/franccesco/workspace/codingdose.info/source/_posts/return-a-list-of-files-and-absolute-path-with-python.md\u0026#39; \u0026#39;/home/franccesco/workspace/codingdose.info/source/_posts/scraping-web-data-with-requests-and-beautifulsoup-part-2.md\u0026#39; ","permalink":"https://codingdose.info/posts/return-a-list-of-files-and-absolute-path-with-python/","summary":"There\u0026rsquo;s a great library added in Python 3.5 that lets you return a list of filenames and folders called glob, here\u0026rsquo;s how to use it.\nReturn files and folders in current folder. 
\u0026gt;\u0026gt;\u0026gt; glob(\u0026#39;**\u0026#39;) [\u0026#39;scaffolds\u0026#39;, \u0026#39;node_modules\u0026#39;, \u0026#39;yarn.lock\u0026#39;, \u0026#39;_config.yml\u0026#39;, \u0026#39;source\u0026#39;, \u0026#39;db.json\u0026#39;, \u0026#39;themes\u0026#39;, \u0026#39;package.json\u0026#39;, \u0026#39;package-lock.json\u0026#39;] Return files and folders recursively \u0026gt;\u0026gt;\u0026gt; glob(\u0026#39;**\u0026#39;, recursive=True) [\u0026#39;scaffolds\u0026#39;, \u0026#39;scaffolds/post.md\u0026#39;, \u0026#39;scaffolds/page.md\u0026#39;, \u0026#39;scaffolds/draft.md\u0026#39;, \u0026#39;node_modules\u0026#39;, \u0026#39;...\u0026#39;] Return only a specific type of file  \u0026gt;\u0026gt;\u0026gt; glob(\u0026#39;*.json\u0026#39;, recursive=True) [\u0026#39;db.json\u0026#39;, \u0026#39;package.","title":"Return a List of Files and Folders With Glob in Python"},{"content":" I encourage developers to see the value of unit testing; I urge them to get into the habit of writing structured tests alongside their code. — CodingHorror\n I was reading about Unit Testing and found a blog entry on CodingHorror called I Pity The Fool Who Doesn\u0026rsquo;t Write Unit Tests, and guess what, he\u0026rsquo;s right.\nSadly, I\u0026rsquo;ve encountered a lot of people who don\u0026rsquo;t write tests for their code and have a CI system (Travis, Jenkins, Gitlab, etc.) 
only to test the syntax of the code with flake8 or pycodestyle, or the result of calling the software without returning an exception\u0026hellip; this is wrong; you\u0026rsquo;re not testing anything there.\nYou should be able to test your code to ensure that you\u0026rsquo;re getting the expected output and then have the confidence to refactor or write new functions without the fear of breaking anything.\nThere is value in testing, and I\u0026rsquo;m not talking about religiously testing first and coding later, for me that\u0026rsquo;s a personal matter; all I\u0026rsquo;m saying is that your code won\u0026rsquo;t have the quality it needs without testing. So stop neglecting Unit Testing, and I invite you to read the CodingHorror blog post.\nLet\u0026rsquo;s jump right into it with a primer for Python.\nUnit Testing a Method Honestly, you should write a failing test first, then write just the necessary code to make it work, and lastly refactor; from then on just rinse and repeat. But as we\u0026rsquo;re still learning how to code, I\u0026rsquo;ll write the code first and we\u0026rsquo;ll test it later.\nWe\u0026rsquo;re going to write a module called say_hi.py that has a function called salute(name) which takes a name as a single argument and returns Hello, \u0026lt;name\u0026gt;!. Here\u0026rsquo;s our code for say_hi.py:\ndef salute(name): return \u0026#39;Hello, {}!\u0026#39;.format(name) Now let\u0026rsquo;s write a test called test_say_hi.py for this module to ensure that our code always returns the desired string:\nimport unittest from say_hi import salute class TestSayHi(unittest.TestCase): \u0026#34;\u0026#34;\u0026#34;Class for testing say_hi.py\u0026#34;\u0026#34;\u0026#34; def test_salute(self): \u0026#34;\u0026#34;\u0026#34;Test salute() function.\u0026#34;\u0026#34;\u0026#34; self.assertEqual(salute(\u0026#39;Anne\u0026#39;), \u0026#39;Hello, Anne!\u0026#39;) if __name__ == \u0026#39;__main__\u0026#39;: unittest.main() Seems a bit long, huh? 
We\u0026rsquo;ll break it down later; for now let\u0026rsquo;s see if our code works:\n$ python test_say_hi.py Aaand guess what?:\n. ---------------------------------------------------------------------- Ran 1 test in 0.000s OK It works correctly, so what is happening in that test? Let\u0026rsquo;s break it down.\nInspecting the Test  First we import the unittest module that will let us write tests for our code:  import unittest Now we import the function salute from the Python file say_hi.py that we just wrote.  from say_hi import salute We create a new class that inherits from unittest.TestCase; we\u0026rsquo;re making a new test case class that will hold our methods to test our say_hi.py functions.  class TestSayHi(unittest.TestCase): \u0026#34;\u0026#34;\u0026#34;Class for testing say_hi.py\u0026#34;\u0026#34;\u0026#34; Now we define our first class method; this method will test our function salute, and this is the important part: what are we doing with self.assertEqual? We are making a comparison here; we are asking our test: the function salute('Anne') should return exactly Hello, Anne!, and if it\u0026rsquo;s not the same, then complain!  def test_salute(self): \u0026#34;\u0026#34;\u0026#34;Test salute() function.\u0026#34;\u0026#34;\u0026#34; self.assertEqual(salute(\u0026#39;Anne\u0026#39;), \u0026#39;Hello, Anne!\u0026#39;) Lastly, this line here is in charge of executing our Unit Tests if the file is called directly as a standalone program.  if __name__ == \u0026#39;__main__\u0026#39;: unittest.main() Testing a Class Now that we have the hang of it, let\u0026rsquo;s create our own class; it shouldn\u0026rsquo;t be hard at all.\nLet\u0026rsquo;s create a Raccoon class with a name (because, why not?) and a rabid status of False\u0026hellip; because we don\u0026rsquo;t want a rabid raccoon.\nThe only thing that is going to change is that we are making things the \u0026ldquo;right way\u0026rdquo; now. 
We\u0026rsquo;re making a Test first, then we write code, and finally we refactor, so let\u0026rsquo;s dive directly into the unit test.\ntest_raccoon.py\nimport unittest from raccoon import Raccoon class TestRabidRaccoon(unittest.TestCase): \u0026#34;\u0026#34;\u0026#34;Test the Raccoon class.\u0026#34;\u0026#34;\u0026#34; def test_raccoon_health(self): \u0026#34;\u0026#34;\u0026#34;Test if the raccoon is rabid or not.\u0026#34;\u0026#34;\u0026#34; # let\u0026#39;s name our favorite raccoon \u0026#39;Helga\u0026#39; helga = Raccoon(\u0026#39;helga\u0026#39;) self.assertEqual(helga.rabid, False) if __name__ == \u0026#39;__main__\u0026#39;: unittest.main() Now let\u0026rsquo;s execute our test and see how our tests guide us to write our code:\n$ python test_raccoon.py Traceback (most recent call last): File \u0026#34;test_raccoon.py\u0026#34;, line 2, in \u0026lt;module\u0026gt; from raccoon import Raccoon ImportError: cannot import name \u0026#39;Raccoon\u0026#39; Our test complains that there\u0026rsquo;s no name Raccoon to import; of course there isn\u0026rsquo;t, because we haven\u0026rsquo;t created it yet, so let\u0026rsquo;s do it:\nraccoon.py\nclass Raccoon(object): def __init__(self, name): self.name = name self.rabid = False Now, we execute our test again:\n$ python test_raccoon.py . 
---------------------------------------------------------------------- Ran 1 test in 0.000s OK And we find out that it\u0026rsquo;s all alright; now it\u0026rsquo;s time to refactor. As we\u0026rsquo;ve hardly written anything, we should at least add documentation to our class without the fear of breaking anything; unit tests have our backs.\nraccoon.py\n\u0026#34;\u0026#34;\u0026#34;Module holding Raccoon class.\u0026#34;\u0026#34;\u0026#34; class Raccoon(object): \u0026#34;\u0026#34;\u0026#34;Class simulating a rabid Raccoon.\u0026#34;\u0026#34;\u0026#34; def __init__(self, name): \u0026#34;\u0026#34;\u0026#34;Initialize attributes.\u0026#34;\u0026#34;\u0026#34; self.name = name self.rabid = False We added simple docstrings to our module; let\u0026rsquo;s see if we didn\u0026rsquo;t break anything:\n$ python test_raccoon.py . ---------------------------------------------------------------------- Ran 1 test in 0.000s OK Refactoring our Unit Test Our test could\u0026rsquo;ve been better and we need to refactor it because there are minor details that we should implement; let\u0026rsquo;s introduce the setUp method.\nFirst of all, if our test is going to use the name Helga multiple times, then we should define a setUp method so we can define our name only once to avoid repetition; we do this inside our Test Case class TestRabidRaccoon. 
Also we\u0026rsquo;re changing assertEqual for a more appropriate and shorter way to check for False, which is assertFalse:\nimport unittest from raccoon import Raccoon class TestRabidRaccoon(unittest.TestCase): \u0026#34;\u0026#34;\u0026#34;Test the Raccoon class.\u0026#34;\u0026#34;\u0026#34; def setUp(self): \u0026#34;\u0026#34;\u0026#34;Setup variables that we\u0026#39;re going to use across our class\u0026#34;\u0026#34;\u0026#34; self.name = \u0026#39;Helga\u0026#39; def test_raccoon_health(self): \u0026#34;\u0026#34;\u0026#34;Test if the raccoon is rabid or not.\u0026#34;\u0026#34;\u0026#34; # let\u0026#39;s pass the previously defined name in setUp helga = Raccoon(self.name) # change assertEqual for a more proper method to check False self.assertFalse(helga.rabid) if __name__ == \u0026#39;__main__\u0026#39;: unittest.main() . ---------------------------------------------------------------------- Ran 1 test in 0.000s OK [Finished in 0.167s] It runs as expected. There are other assertion methods that you can use to test your code; here\u0026rsquo;s a short list:\n   Method Description     assertEqual(A, B) Check if A is equal to B   assertNotEqual(A, B) Check if A is not equal to B   assertTrue(A) Check if A returns True   assertFalse(A) Check if A returns False   assertIn(item, list) Check if item is in list   assertNotIn(item, list) Check if item is not in list    Conclusion This is enough to get you started. There\u0026rsquo;s a lot of value in testing, and you should do it: you\u0026rsquo;ll become a better developer and you\u0026rsquo;ll also adopt good practices. If you\u0026rsquo;re just starting to code, you will be able to use your tests as guidance because you will be forced to think about what you want your code to do before actually coding it.\nIf you are an experienced developer, you will catch bugs more easily, you\u0026rsquo;ll have confidence when you\u0026rsquo;re refactoring your code, things are less likely to break, and you will be able 
to implement new features exactly as you want them.\nSo, code first and test later, or test first and code later, that\u0026rsquo;s OK for me, as long as you are testing it.\n","permalink":"https://codingdose.info/posts/unit-testing-with-python-an-introduction/","summary":"I encourage developers to see the value of unit testing; I urge them to get into the habit of writing structured tests alongside their code. — CodingHorror\n I was reading about Unit Testing and found a blog entry in CodingHorror called I Pity The Fool Who Doesn\u0026rsquo;t Write Unit Tests, and guess what, he\u0026rsquo;s right.\nSadly, I\u0026rsquo;ve encountered a lot of people who doesn\u0026rsquo;t write tests for their code, and have a CI System (Travis, Jenkins, Gitlab, etc.","title":"Unit Testing Basics With Python"},{"content":"Why? Sometimes you don\u0026rsquo;t want output at all to your screen, there are times when I\u0026rsquo;m writing a function that prints something to the screen, the ideal thing (at least for me) is to use return instead of print() of course, but let\u0026rsquo;s say that you don\u0026rsquo;t have another option and when it comes to Unit Testing it becomes annoying. How can we suppress STDOUT?\nRedirecting STDOUT We\u0026rsquo;re writing a simple script that greets someone:\ndef salute(name): \u0026#34;\u0026#34;\u0026#34;Says hi to someone.\u0026#34;\u0026#34;\u0026#34; print(\u0026#39;Hi, {}!\u0026#39;.format(name)) Let\u0026rsquo;s run this code shall we? We\u0026rsquo;ll import it and use the function salute(name):\n\u0026gt;\u0026gt;\u0026gt; from say_hi import salute \u0026gt;\u0026gt;\u0026gt; salute(\u0026#39;Anne\u0026#39;) Hi, Anne! It works great. 
Now, to suppress the output of salute(name) we\u0026rsquo;re going to redirect sys.stdout into a StringIO to capture any text output:\n# remember to import io and sys import io import sys def salute(name): \u0026#34;\u0026#34;\u0026#34;Says hi to someone.\u0026#34;\u0026#34;\u0026#34; print(\u0026#39;Hi, {}!\u0026#39;.format(name)) # create a text trap and redirect stdout text_trap = io.StringIO() sys.stdout = text_trap # execute our now mute function salute(\u0026#39;Anne\u0026#39;) # now restore stdout function sys.stdout = sys.__stdout__ If we execute this python code, no output will be displayed. This is useful when we are Unit Testing and we want to suppress print() output. If we want to check the captured test we can do this with .getvalue():\n-- SNIP -- # getting trapped print print(\u0026#39;Captured text:\u0026#39;) print(text_trap.getvalue()) Result:\nCaptured text: Hi, Anne! Unit Testing print() If we have a module that prints something to the screen instead of returning a value, we can test that print() string with the method above. First, let\u0026rsquo;s remove everything from our previous code, leaving only our salute method:\ndef salute(name): \u0026#34;\u0026#34;\u0026#34;Says hi to someone.\u0026#34;\u0026#34;\u0026#34; print(\u0026#39;Hi, {}!\u0026#39;.format(name)) Using the code above, what we want to do is to test if the method salute(name) always prints a greeting in the following format: Hi, __name__! 
(keep in mind that the function print() always inserts a new line, or \\n, at the end)\nLet\u0026rsquo;s set up our Unit Test:\nimport unittest from say_hi import salute class TestSayHi(unittest.TestCase): \u0026#34;\u0026#34;\u0026#34;Tests for say_hi.py\u0026#34;\u0026#34;\u0026#34; def setUp(self): self.name = \u0026#39;Anne\u0026#39; def test_salute(self): self.assertEqual(salute(self.name), \u0026#39;Hi, Anne!\\n\u0026#39;) if __name__ == \u0026#39;__main__\u0026#39;: unittest.main() When we execute our code we obtain the following error:\n$ python test_say_hi.py Hi, Anne! F ====================================================================== FAIL: test_salute (__main__.TestSayHi) ---------------------------------------------------------------------- Traceback (most recent call last): File \u0026#34;test_say_hi.py\u0026#34;, line 12, in test_salute self.assertEqual(salute(self.name), \u0026#39;Hi, Anne!\\n\u0026#39;) AssertionError: None != \u0026#39;Hi, Anne!\\n\u0026#39; ---------------------------------------------------------------------- Ran 1 test in 0.000s Why didn\u0026rsquo;t our assertion work here? salute('Anne') returns 'Hi, Anne!\\n', right? It doesn\u0026rsquo;t. 
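Before looking at the fix, the claim is easy to verify directly in a quick sketch: print() writes to stdout and returns None, so there is nothing for assertEqual to compare against.

```python
# Minimal sketch: print() has no return value, so capturing its
# result yields None. This is exactly why the assertion above
# ends up comparing None against 'Hi, Anne!\n'.
result = print('Hi, Anne!')  # writes the greeting to stdout
print(result is None)        # -> True
```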
Print doesn\u0026rsquo;t return anything actually, it just prints something to the screen (to put it mildly), but it doesn\u0026rsquo;t returns a string.\nBut with our previous technique we can capture the string and store it in a value so we can compare our print string:\nimport sys import unittest import io from say_hi import salute class TestSayHi(unittest.TestCase): \u0026#34;\u0026#34;\u0026#34;Tests for say_hi.py\u0026#34;\u0026#34;\u0026#34; def setUp(self): self.name = \u0026#39;Anne\u0026#39; def test_salute(self): \u0026#34;\u0026#34;\u0026#34;Test print in salute().\u0026#34;\u0026#34;\u0026#34; # create a trap text_trap = io.StringIO() sys.stdout = text_trap salute(self.name) # restore stdout sys.stdout = sys.__stdout__ self.assertEqual(text_trap.getvalue(), \u0026#39;Hi, Anne!\\n\u0026#39;) if __name__ == \u0026#39;__main__\u0026#39;: unittest.main() Inspecting test_salute() we redirect our print to our text_trap, and after restoring stdout to its original functionality we can compare the value of text_trap.getvalue() with our expected output: 'Hi, Anne!\\n':\n. 
---------------------------------------------------------------------- Ran 1 test in 0.000s OK And it works correctly, we can now compare our print value with an unit test, plus it doesn\u0026rsquo;t print anything in our tests.\nUPDATE There\u0026rsquo;s a much nicer and safer way to do this instead of rewriting stdout, we can use a Context Manager temporarily redirect sys.stdout without touching it.\nimport io from contextlib import redirect_stdout def salute(name): \u0026#34;\u0026#34;\u0026#34;Says hi to someone.\u0026#34;\u0026#34;\u0026#34; print(\u0026#39;Hi, {}!\u0026#39;.format(name)) # set a trap and redirect stdout trap = io.StringIO() with redirect_stdout(trap): salute(\u0026#39;Anne\u0026#39;) # getting redirected output captured_stdout = trap.getvalue() print(captured_stdout) And if we\u0026rsquo;re going to test salute(name):\nimport io import unittest from say_hi import salute from contextlib import redirect_stdout class TestSayHi(unittest.TestCase): \u0026#34;\u0026#34;\u0026#34;Tests for say_hi.py\u0026#34;\u0026#34;\u0026#34; def setUp(self): self.name = \u0026#39;Anne\u0026#39; def test_salute(self): \u0026#34;\u0026#34;\u0026#34;Test print in salute().\u0026#34;\u0026#34;\u0026#34; # create a trap text_trap = io.StringIO() with redirect_stdout(text_trap): salute(self.name) self.assertEqual(text_trap.getvalue(), \u0026#39;Hi, Anne!\\n\u0026#39;) if __name__ == \u0026#39;__main__\u0026#39;: unittest.main() This is a much safer and cleaner approach, have fun!\n","permalink":"https://codingdose.info/posts/supress-print-output-in-python/","summary":"Why? Sometimes you don\u0026rsquo;t want output at all to your screen, there are times when I\u0026rsquo;m writing a function that prints something to the screen, the ideal thing (at least for me) is to use return instead of print() of course, but let\u0026rsquo;s say that you don\u0026rsquo;t have another option and when it comes to Unit Testing it becomes annoying. 
How can we suppress STDOUT?\nRedirecting STDOUT We\u0026rsquo;re writing a simple script that greets someone:","title":"Suppress Print Output in Python"},{"content":"These are my study notes on Classes and special methods. As always, if something is wrong then you can always correct me, it would help me and everybody else.\nClass \u0026lsquo;Employee\u0026rsquo; example This is the example class that we\u0026rsquo;re going to use, a class Employee which we will inherit from to create more classes like Manager and Supervisor:\nclass Employee(object): \u0026#34;\u0026#34;\u0026#34;Class simulating an Employee with basic attributes.\u0026#34;\u0026#34;\u0026#34; total_employees = 0 def __init__(self, name, rate, position): self.name = name self.rate = rate self.owed = 0 self.position = position Employee.total_employees += 1 Let\u0026rsquo;s break this up a little bit, we\u0026rsquo;re creating the class Employee that holds a variable total_employee and every time we initialize our class we will add a +1 to our total employee number. This initialization of this class will require a name, rate and position. Remember that the keyword self refers to the class itself, these variables are shared in the whole class. So far, so good.\nSave this code as class_example.py and import it in python:\n\u0026gt;\u0026gt;\u0026gt; from class_example import Employee \u0026gt;\u0026gt;\u0026gt; employee1 = Employee(\u0026#39;Anne\u0026#39;, 20, \u0026#39;manager\u0026#39;) \u0026gt;\u0026gt;\u0026gt; employee1.name \u0026#39;Anne\u0026#39; \u0026gt;\u0026gt;\u0026gt; employee1.rate 20 \u0026gt;\u0026gt;\u0026gt; employee1.position \u0026#39;manager\u0026#39; \u0026gt;\u0026gt;\u0026gt; Now we can instantiate our class in employee1 and inspect the employee\u0026rsquo;s name, rate and position.\nInitializing a class with __init__ This method let us initialize an instance of our class Employee with x = Class(Args). 
This way, every time we want to instantiate an object we have to pass parameters that will act as attributes of our class object. You can see in the last line that every time we initialize our class it will bump the number of total_employees, let\u0026rsquo;s check this out:\n\u0026gt;\u0026gt;\u0026gt; from class_example import Employee \u0026gt;\u0026gt;\u0026gt; emp1 = Employee(\u0026#39;Anne\u0026#39;, 20, \u0026#39;manager\u0026#39;) \u0026gt;\u0026gt;\u0026gt; Employee.total_employees 1 \u0026gt;\u0026gt;\u0026gt; emp2 = Employee(\u0026#39;John\u0026#39;, 25, \u0026#39;supervisor\u0026#39;) \u0026gt;\u0026gt;\u0026gt; Employee.total_employees 2 \u0026gt;\u0026gt;\u0026gt; emp3 = Employee(\u0026#39;Fran\u0026#39;, 30, \u0026#39;developer\u0026#39;) \u0026gt;\u0026gt;\u0026gt; Employee.total_employees 3 See? Every time a new employee is created, the number of employees goes up.\nChanging the String representation with __repr__ If we check the string representation of our class then we will have something like this:\n\u0026gt;\u0026gt;\u0026gt; emp1 \u0026lt;class_example.Employee object at 0x7f3809baec50\u0026gt; To give a more accurate description of our object, we can change the default representation to a string that describes the initialized class more appropriately by re-defining the __repr__ method:\nclass Employee(object): \u0026#34;\u0026#34;\u0026#34;Class simulating an Employee with basic attributes.\u0026#34;\u0026#34;\u0026#34; total_employees = 0 -- SNIP -- def __repr__(self): return \u0026#39;Employee object with basic attributes\u0026#39; Now we can check again to see how our string representation gets displayed:\n\u0026gt;\u0026gt;\u0026gt; emp1 Employee object with basic attributes \u0026gt;\u0026gt;\u0026gt; Adding more functionality to our class Let\u0026rsquo;s track the hours our employee has worked and how much we owe them:\nclass Employee(object): -- SNIP -- def add_hours_worked(self, total_hours): self.owed = total_hours * 
self.rate print(\u0026#39;{} work hours were added to {}\u0026#39;.format(total_hours, self.name)) def pay(self): print(\u0026#39;Employee {} was paid with {} USD\u0026#39;.format(self.name, self.owed)) self.owed = 0 With the Employee.add_hours_worked(total_hours) method we calculate how much we have to pay our employee based on the total_hours they have worked. And with the pay() method we let our user know that we have paid the owed amount, and we owe nothing more until more working hours are completed.\nInheritance Now that we have defined a very generic Employee, we\u0026rsquo;re going to define other classes such as Developer and Manager that inherit the same properties and methods of the class Employee but change a few things like the string representation and the rate; let\u0026rsquo;s also add a bonus to the payment.\nWe\u0026rsquo;re rewriting add_working_hours() to add a bonus system: if this developer has worked more than 150 hours in a month then a bonus will be added, but we\u0026rsquo;re not rewriting the whole method.\nclass Developer(Employee): \u0026#34;\u0026#34;\u0026#34;Employee in charge of development and bug-crushing activities.\u0026#34;\u0026#34;\u0026#34; def __init__(self, name, rate, position): super().__init__(name, rate, position) self.bonus = 300 def __repr__(self): return self.__doc__ def add_working_hours(self, total_hours): self.owed = total_hours * self.rate if total_hours \u0026gt; 150: self.owed += self.bonus print(\u0026#39;{} work hours were added to {}\u0026#39;.format(total_hours, self.name)) This lets us inherit all variables, initialization values and even methods, but what have we changed here exactly? 
Well, we added a bonus of 300 USD as an initialization attribute, and we also changed the string representation to return the special attribute __doc__; this means that whenever we inspect our object, it will return the docstring as our representation:\n\u0026gt;\u0026gt;\u0026gt; dev_emp = Developer(\u0026#39;Anne\u0026#39;, 20, \u0026#39;developer\u0026#39;) \u0026gt;\u0026gt;\u0026gt; dev_emp Employee in charge of development and bug-crushing activities. We also modified add_working_hours() and added a simple control flow\n... if total_hours \u0026gt; 150: self.owed += self.bonus ... As you can see here, if our developer has worked more than 150 hours, a 300 USD bonus is added to their payment. We can see that in action:\n\u0026gt;\u0026gt;\u0026gt; dev_emp = Developer(\u0026#39;Anne\u0026#39;, 20, \u0026#39;developer\u0026#39;) \u0026gt;\u0026gt;\u0026gt; dev_emp2 = Developer(\u0026#39;Johnny\u0026#39;, 20, \u0026#39;developer\u0026#39;) \u0026gt;\u0026gt;\u0026gt; dev_emp.add_working_hours(160) 160 work hours were added to Anne \u0026gt;\u0026gt;\u0026gt; dev_emp2.add_working_hours(130) 130 work hours were added to Johnny \u0026gt;\u0026gt;\u0026gt; dev_emp.pay() Employee Anne was paid with 3500 USD \u0026gt;\u0026gt;\u0026gt; dev_emp2.pay() Employee Johnny was paid with 2600 USD The developers \u0026lsquo;Anne\u0026rsquo; and \u0026lsquo;Johnny\u0026rsquo; both earn 20 USD per hour of work. Since \u0026lsquo;Anne\u0026rsquo; reached 160 hours of work this month, she gets the bonus (160 * 20 + 300 = 3500); \u0026lsquo;Johnny\u0026rsquo;, who worked only 130 hours this month, does not (130 * 20 = 2600).\nBut have you noticed that we paid our Developers even though we didn\u0026rsquo;t define a pay() method in our Developer class? 
This is because we inherit the methods found in the Employee class, so we don\u0026rsquo;t need to rewrite them.\nChanging our values directly We can change our class values directly; for example, if we want to change our employee\u0026rsquo;s name we can do that easily with:\n\u0026gt;\u0026gt;\u0026gt; dev_emp = Developer(\u0026#39;Anne\u0026#39;, 20, \u0026#39;developer\u0026#39;) \u0026gt;\u0026gt;\u0026gt; dev_emp.name = \u0026#39;Not Anne\u0026#39; \u0026gt;\u0026gt;\u0026gt; dev_emp.name \u0026#39;Not Anne\u0026#39; Other Special Methods Class.__getattribute__(attribute) returns an attribute of a class; beware though, this is the same as Class.attribute, which is shorter:\n\u0026gt;\u0026gt;\u0026gt; dev_emp.__getattribute__(\u0026#39;rate\u0026#39;) 20 \u0026gt;\u0026gt;\u0026gt; dev_emp.rate 20 Class.__dict__ returns a dictionary of the Class' attributes:\n\u0026gt;\u0026gt;\u0026gt; dev_emp.__dict__ {\u0026#39;name\u0026#39;: \u0026#39;not anne\u0026#39;, \u0026#39;rate\u0026#39;: 20, \u0026#39;owed\u0026#39;: 0, \u0026#39;position\u0026#39;: \u0026#39;developer\u0026#39;, \u0026#39;bonus\u0026#39;: 300} Class.__module__ returns the module that holds that Class:\n\u0026gt;\u0026gt;\u0026gt; dev_emp.__module__ \u0026#39;class_example\u0026#39; Set attributes to a Class Setting an attribute on our class is easy: Class.new_attribute = value. Let\u0026rsquo;s set the attribute active so we can know if our Developer employee is currently active:\n\u0026gt;\u0026gt;\u0026gt; from class_example import * \u0026gt;\u0026gt;\u0026gt; dev_emp = Developer(\u0026#39;Anne\u0026#39;, 20, \u0026#39;developer\u0026#39;) \u0026gt;\u0026gt;\u0026gt; dev_emp.active = True \u0026gt;\u0026gt;\u0026gt; dev_emp.active True These are my study notes for today; if you have any questions, feel free to ask. 
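The bonus arithmetic above is easy to check with a condensed, self-contained sketch of the two classes. Names follow the article, but pay() returns the amount instead of printing it here, purely so the numbers are simple to verify.

```python
class Employee(object):
    """Class simulating an Employee with basic attributes."""
    def __init__(self, name, rate, position):
        self.name = name
        self.rate = rate
        self.owed = 0
        self.position = position

    def pay(self):
        # return (rather than print) the owed amount, then reset it
        paid, self.owed = self.owed, 0
        return paid


class Developer(Employee):
    """Employee in charge of development and bug-crushing activities."""
    def __init__(self, name, rate, position):
        super().__init__(name, rate, position)
        self.bonus = 300

    def add_working_hours(self, total_hours):
        self.owed = total_hours * self.rate
        if total_hours > 150:  # bonus only past 150 hours
            self.owed += self.bonus


anne = Developer('Anne', 20, 'developer')
anne.add_working_hours(160)
print(anne.pay())  # 160 * 20 + 300 = 3500

johnny = Developer('Johnny', 20, 'developer')
johnny.add_working_hours(130)
print(johnny.pay())  # 130 * 20 = 2600
```

Note that pay() is defined only on Employee; Developer gets it through inheritance, which is exactly the point made above.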
Here\u0026rsquo;s the full code used in the examples:\nclass Employee(object): \u0026#34;\u0026#34;\u0026#34;Class simulating an Employee with basic attributes.\u0026#34;\u0026#34;\u0026#34; total_employees = 0 def __init__(self, name, rate, position): self.name = name self.rate = rate self.owed = 0 self.position = position Employee.total_employees += 1 def add_working_hours(self, total_hours): self.owed = total_hours * self.rate print(\u0026#39;{} work hours were added to {}\u0026#39;.format(total_hours, self.name)) def pay(self): print(\u0026#39;Employee {} was paid with {} USD\u0026#39;.format(self.name, self.owed)) self.owed = 0 def __repr__(self): return self.__doc__ class Developer(Employee): \u0026#34;\u0026#34;\u0026#34;Employee in charge of development and bug-crushing activities.\u0026#34;\u0026#34;\u0026#34; def __init__(self, name, rate, position): super().__init__(name, rate, position) self.bonus = 300 def __repr__(self): return self.__doc__ def add_working_hours(self, total_hours): self.owed = total_hours * self.rate if total_hours \u0026gt; 150: self.owed += self.bonus print(\u0026#39;{} work hours were added to {}\u0026#39;.format(total_hours, self.name)) ","permalink":"https://codingdose.info/posts/class-inheritance-with-python/","summary":"These are my study notes on Classes and special methods. 
As always, if something is wrong then you can always correct me, it would help me and everybody else.\nClass \u0026lsquo;Employee\u0026rsquo; example This is the example class that we\u0026rsquo;re going to use, a class Employee which we will inherit from to create more classes like Manager and Supervisor:\nclass Employee(object): \u0026#34;\u0026#34;\u0026#34;Class simulating an Employee with basic attributes.\u0026#34;\u0026#34;\u0026#34; total_employees = 0 def __init__(self, name, rate, position): self.","title":"Classes and Special Methods in Python"},{"content":"To get the filesize of a download is really easy, servers usually provide a Content-Length in its header response that let us know how heavy is the content we are requesting. We can find out this content length opening our shell and requesting a HEAD response in linux:\n\nAs you can see, our content length is display in bytes. Let\u0026rsquo;s try to get this response with Requests\nDisplay Content-Length with Requests Let\u0026rsquo;s use an image from httpbin. 
Remember to make a HEAD request instead of a GET request; this way we don\u0026rsquo;t have to download the entire file:\nimport requests \u0026gt;\u0026gt;\u0026gt; req = requests.head(\u0026#39;https://httpbin.org/image/png\u0026#39;) \u0026gt;\u0026gt;\u0026gt; req.headers[\u0026#39;Content-Length\u0026#39;] \u0026#39;8090\u0026#39; When we filter the headers with the Content-Length key we can see how heavy our request will be without having to actually download the file.\nCalculating the file-size without Content-Length This is a tricky one because not every web server is going to provide you with a Content-Length in its headers; this happens sometimes with csv, xml, and other text file-types. Luckily they\u0026rsquo;re not that heavy and we can make a GET request to learn the download size:\n\u0026gt;\u0026gt;\u0026gt; req = requests.get(\u0026#39;http://data.example.com/attachment_id=12\u0026#39;) \u0026gt;\u0026gt;\u0026gt; len(req.content) 659 As we can see here, we can calculate a file size (mostly for text files) using the len() function. If you have a better alternative you can show it off in the comments below.\n","permalink":"https://codingdose.info/posts/get-a-download-size-with-requests/","summary":"To get the filesize of a download is really easy, servers usually provide a Content-Length in its header response that let us know how heavy is the content we are requesting. We can find out this content length opening our shell and requesting a HEAD response in linux:\n\nAs you can see, our content length is display in bytes. 
Let\u0026rsquo;s try to get this response with Requests\nDisplay Content-Length with Requests Let\u0026rsquo;s use an image from httpbin.","title":"Figure Out a Download File-Size With Requests"},{"content":"TL;DR: Environment Variables API keys are one example of sensitive information that should remain secret, the problem is that we need to use them in our code to access third-party services like Twitter, Github, DigitalOcean and so on, so how do we manage to use those API keys without hard-coding them into the source code?\n The twelve-factor app stores config in environment variables (often shortened to env vars or env). Env vars are easy to change between deploys without changing any code; unlike config files, there is little chance of them being checked into the code repo accidentally; and unlike custom config files, or other config mechanisms such as Java System Properties, they are a language- and OS-agnostic standard. — Twelve-Factor App On Configuration\n The answer is: Environment Variables. This is based on the Twelve-Factor App methodology which I recommend you to read, it is an excellent essay that will teach you the process of Software-as-a-Service (SaaS). 
Let\u0026rsquo;s dive into it using Python and Ruby as examples.\nWhy this matters Let\u0026rsquo;s put this short and easy: let\u0026rsquo;s say that we have two types of credentials to access a certain web service:\n an API id = \u0026lsquo;846a4da4d06as84d6as84d06\u0026rsquo; and a SECRET id = \u0026lsquo;secret_id_so_secret\u0026rsquo;  If we want to use those credentials we could hard-code them into our software like this:\n# Setting credentials api_id = \u0026#39;846a4da4d06as84d6as84d06\u0026#39; secret_id = \u0026#39;secret_id_so_secret\u0026#39; # login function def login(api_key, secret): print(\u0026#39;Logged with:\u0026#39;) print(\u0026#39;API: \u0026#39; + api_key) print(\u0026#39;SECRET: \u0026#39; + secret) # Logging in to third party app with hard-coded credentials login(api_id, secret_id) That would be extremely wrong, because anyone who lays eyes on your code will have the power to steal your credentials and access sensitive information about you, your software and even Personally Identifiable Information (PII) about your customers, your team, or anybody who uses your software.\nBut, how do we remove these credentials from our code and use them as environment variables?\nSecurely Storing Credentials in Python python-dotenv is a very easy to use package that reads key=value pairs from a text file .env (hence dotenv) and loads those variables into your environment so you can use your API keys securely in your code. 
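There is no magic behind dotenv-style loaders: a .env file is just KEY=value lines that get exported into the process environment. Here is a minimal, hypothetical sketch of the idea (load_env_lines is an illustrative helper, not the python-dotenv API; the real package handles quoting and edge cases far more robustly):

```python
import os

def load_env_lines(lines):
    """Hypothetical helper: export KEY=value pairs into os.environ."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and comments
        key, _, value = line.partition('=')
        # strip surrounding whitespace and simple quoting
        os.environ[key.strip()] = value.strip().strip('\'"')

load_env_lines(["api_id = '846a4da4d06as84d6as84d06'"])
print(os.getenv('api_id'))  # -> 846a4da4d06as84d6as84d06
```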
You can install python-dotenv with pipenv.\nNow let\u0026rsquo;s move the API credentials from our code to a safe .env text file:\nAdd a Git exclusion for .env First we must add an exclusion to our .gitignore in git so they are not uploaded to Github, commit your exclusions and you\u0026rsquo;re ready to fill your .env credentials\necho \u0026#34;.env\u0026#34; \u0026gt;\u0026gt; .gitignore git commit .gitignore -m \u0026#34;add exclusion for .env\u0026#34; Install python-dotenv with Pipenv $ pipenv install python-dotenv # Output Installing python-dotenv… -- SNIP -- Adding python-dotenv to Pipfile\u0026#39;s [packages]… Pipfile.lock not found, creating… Locking [dev-packages] dependencies… Locking [packages] dependencies… Updated Pipfile.lock (446908)! Installing dependencies from Pipfile.lock (446908)… 🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 2/2 — 00:00:00 To activate this project\u0026#39;s virtualenv, run the following: $ pipenv shell Adding credentials Now let\u0026rsquo;s strip our credentials from our code and add them to our dotenv file:\n# .env file api_id = '846a4da4d06as84d6as84d06' secret_id = 'secret_id_so_secret' Loading credentials to our environment variables Let\u0026rsquo;s modify our code to add the python-dotenv package\n# import functionality to find and load dotenv credentials # and getenv to get environment variables from OS from os import getenv from dotenv import load_dotenv, find_dotenv # load environment keys, it will automatically find and load .env load_dotenv(find_dotenv()) # login function def login(api_key, secret): print(\u0026#39;Logged with:\u0026#39;) print(\u0026#39;API: \u0026#39; + api_id) print(\u0026#39;SECRET: \u0026#39; + secret_id) # Logging in to third party app with environment variable credentials api_id = getenv(\u0026#39;api_id\u0026#39;) secret_id = getenv(\u0026#39;secret_id\u0026#39;) login(api_id, secret_id) Now when we run our code we will successfully access our service with our credentials without compromising our 
keys\n# Output Logged with: API: 846a4da4d06as84d6as84d06 SECRET: secret_id_so_secret Ruby Example Add dotenv to your Gemfile and remember to add your credentials to your .env file\nsource \u0026#39;https://rubygems.org\u0026#39; gem \u0026#39;dotenv\u0026#39; Import it and use it in your code as follows\nrequire \u0026#39;dotenv\u0026#39; # load dotenv credentials Dotenv.load api_id = ENV[\u0026#39;api_id\u0026#39;] secret_id = ENV[\u0026#39;secret_id\u0026#39;] puts \u0026#34;API: #{api_id}\u0026#34; puts \u0026#34;SECRET: #{secret_id}\u0026#34; Now execute your code\nbundle exec ruby test.rb # Output API: 846a4da4d06as84d6as84d06 SECRET: secret_id_so_secret ","permalink":"https://codingdose.info/posts/how-to-securely-store-sensitive-configuration-with-dotenv/","summary":"TL;DR: Environment Variables API keys are one example of sensitive information that should remain secret, the problem is that we need to use them in our code to access third-party services like Twitter, Github, DigitalOcean and so on, so how do we manage to use those API keys without hard-coding them into the source code?\n The twelve-factor app stores config in environment variables (often shortened to env vars or env).","title":"How to Securely Store Sensitive Configuration With Dotenv"},{"content":"Pyenv is an excellent tool to have in your tool-set, it manages Python versions much like rbenv for Ruby, in fact it was forked from it.\n pyenv lets you easily switch between multiple versions of Python. It\u0026rsquo;s simple, unobtrusive, and follows the UNIX tradition of single-purpose tools that do one thing well.\n Installation The automatic installer provided in GitHub will take care of everything so you don\u0026rsquo;t have to worry about configuring anything. 
(pro-tip: triple click to select the whole line).\ncurl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash That\u0026rsquo;s the only thing you have to do, if you\u0026rsquo;re having issues installing a python version then you will have to install the development libraries:\nsudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \\ libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \\ xz-utils tk-dev Usage Install another version: pyenv install 2.7.14 # Output Downloading Python-3.6.4.tar.xz... -\u0026gt; https://www.python.org/ftp/python/3.6.4/Python-3.6.4.tar.xz Installing Python-3.6.4... Installed Python-3.6.4 to /home/franccesco/.pyenv/versions/3.6.4 Check installed versions pyenv versions # Output system * 2.7.14 (set by /home/franccesco/.pyenv/version) 3.6.4 Check current version pyenv version # Output 2.7.14 (set by /home/franccesco/.pyenv/version) Set global version pyenv global 3.6.4 # No output; check with version command. Set python version per directory pyenv local 2.7.14 # No output; check with version command. A file named .python-version should contain the python version to use uppon entering the folder: cat .python-version\n# Output 2.7.14 Uninstall a version pyenv uninstall 2.7.14 # Output pyenv: remove /home/franccesco/.pyenv/versions/2.7.14? yes Update Pyenv pyenv update # Output Updating /home/franccesco/.pyenv... From https://github.com/pyenv/pyenv * branch master -\u0026gt; FETCH_HEAD Already up-to-date. -- SNIP -- Updating /home/franccesco/.pyenv/plugins/pyenv-virtualenv... From https://github.com/yyuu/pyenv-virtualenv * branch master -\u0026gt; FETCH_HEAD Already up-to-date. Updating /home/franccesco/.pyenv/plugins/pyenv-which-ext... From https://github.com/yyuu/pyenv-which-ext * branch master -\u0026gt; FETCH_HEAD Already up-to-date. 
Final Words Pyenv has a great integration with Pipenv and you can manage Python versions and package versions with both of them at the same time without any hassle, I greatly recommend you to check them out.\nRead more | Pyenv Repository\n","permalink":"https://codingdose.info/posts/manage-python-versions-with-pyenv/","summary":"Pyenv is an excellent tool to have in your tool-set, it manages Python versions much like rbenv for Ruby, in fact it was forked from it.\n pyenv lets you easily switch between multiple versions of Python. It\u0026rsquo;s simple, unobtrusive, and follows the UNIX tradition of single-purpose tools that do one thing well.\n Installation The automatic installer provided in GitHub will take care of everything so you don\u0026rsquo;t have to worry about configuring anything.","title":"Manage Python Versions With Pyenv"},{"content":"One thing that puzzled me as a newbie (disclaimer: I still am) are accessors in Ruby, more commonly known as setters and getters or explicitly described as attr_reader, attr_writer and attr_accessor. 
Now let\u0026rsquo;s dive into the code first and describe the concepts of accessors after we\u0026rsquo;re done with coding.\nInitializing a Class Let\u0026rsquo;s say we want to create a class to resemble a Person with a name, and finally let\u0026rsquo;s try to access that name outside the class:\n# Class definition class Person def initialize(name) @name = name end end # Initialize class and inspect it p1 = Person.new(\u0026#39;Anne\u0026#39;) # Can\u0026#39;t access \u0026#39;name\u0026#39; variable outside class puts p1.name But it seems that we can\u0026rsquo;t access the variable @name even though we initialized right:\n# Output: test.rb:13:in `\u0026lt;main\u0026gt;\u0026#39;: undefined method `name\u0026#39; for #\u0026lt;Person:0x00000002278100 @name=\u0026#34;name\u0026#34;\u0026gt; (NoMethodError) This is because we have to define a method to access the variable inside the class.\nGetters and Setters Let\u0026rsquo;s grab our code and add two methods:\n A method to update the person\u0026rsquo;s name And a method to read the person\u0026rsquo;s name  # Class definition class Person def initialize(name) @name = name end # Update \u0026#39;name\u0026#39; # use =() to make a method behave # like an attribute assignment def name=(name) @name = name end # Get value of \u0026#39;name\u0026#39; def name @name end end # Initialize class and the value \u0026#39;name\u0026#39; p1 = Person.new(\u0026#39;Anne\u0026#39;) puts p1.name # Change \u0026#39;name\u0026#39; p1.name = \u0026#39;Johnson\u0026#39; puts p1.name # Output Anne Johnson Now this code works perfectly, but a person doesn\u0026rsquo;t have just one attribute like name, they have age, height, eye color, skin color, hair color. Could you imagine writing endless methods about each attribute? Gladly we have accessors\nWrite and Read Accessor Before we disclose the technical concept of accessors let\u0026rsquo;s first try them, shall we? 
We\u0026rsquo;re changing our code into a more sophisticated and shorter one with attr_accessor:\n# Class definition class Person attr_accessor :name def initialize(name) @name = name end end # Initialize class and the value \u0026#39;name\u0026#39; p1 = Person.new(\u0026#39;Anne\u0026#39;) puts p1.name # Change name p1.name = \u0026#39;Johnson\u0026#39; puts p1.name # Output Anne Johnson What happened here? We wrote an accessor that allows us to read and update the attribute name in the class Person; this way we can add more attributes like height and weight in a single line:\n# Class definition class Person attr_accessor :name, :height, :weight def initialize(name) @name = name end end # Initialize class with a name and # add values outside the class p1 = Person.new(\u0026#39;Anne\u0026#39;) p1.height = \u0026#39;1.80m\u0026#39; p1.weight = \u0026#39;180lbs\u0026#39; puts \u0026#34;Name: #{p1.name}\u0026#34; puts \u0026#34;Height: #{p1.height}\u0026#34; puts \u0026#34;Weight: #{p1.weight}\u0026#34; # Output Name: Anne Height: 1.80m Weight: 180lbs Write-Only Accessor Now that\u0026rsquo;s a lot easier than writing a read and a write method for each attribute, but what if we want to set an attribute but not read it? 
For example, a person can have many thoughts but no one else can read them. Let\u0026rsquo;s give this person a thoughts attribute and try to access this person\u0026rsquo;s thoughts outside the class:\n# Class definition class Person attr_accessor :name, :height, :weight attr_writer :thoughts def initialize(name) @name = name end end # Initialize class with a name and # add values outside the class p1 = Person.new(\u0026#39;Anne\u0026#39;) p1.height = \u0026#39;1.80m\u0026#39; p1.weight = \u0026#39;180lbs\u0026#39; # Set a thought and inspect the class p1.thoughts = \u0026#39;pizza \u0026lt;3\u0026#39; puts p1.inspect puts \u0026#34;Name: #{p1.name}\u0026#34; puts \u0026#34;Height: #{p1.height}\u0026#34; puts \u0026#34;Weight: #{p1.weight}\u0026#34; # Try to access that thought puts \u0026#34;#{p1.name} is thinking about: #{p1.thoughts}\u0026#34; # Output #\u0026lt;Person:0x00000000e5f538 @name=\u0026#34;Anne\u0026#34;, @height=\u0026#34;1.80m\u0026#34;, @weight=\u0026#34;180lbs\u0026#34;, @thoughts=\u0026#34;pizza \u0026lt;3\u0026#34;\u0026gt; Name: Anne Height: 1.80m Weight: 180lbs test.rb:26:in `\u0026lt;main\u0026gt;\u0026#39;: undefined method `thoughts\u0026#39; for #\u0026lt;Person:0x00000000e5f538\u0026gt; (NoMethodError) See? We are able to set a value on the thoughts attribute, but outside the class we are unable to read it (line 26).\nRead-Only Accessor Now, a person has a lot of things that they can\u0026rsquo;t change\u0026hellip; not by conventional means at least. 
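Before wiring a read-only accessor into our Person class, it may help to see roughly what it gives us under the hood. The following is a hypothetical sketch (the class name Sketch is made up for illustration, and this is not Ruby\u0026rsquo;s actual implementation): attr_reader :eye_color behaves, in spirit, like writing the getter by hand and defining no setter at all:

```ruby
# Hypothetical sketch: a hand-written getter with no setter,
# which is roughly what `attr_reader :eye_color` gives us.
class Sketch
  def initialize(eye_color)
    @eye_color = eye_color
  end

  # roughly what `attr_reader :eye_color` defines for us
  def eye_color
    @eye_color
  end
end

s = Sketch.new('Green')
puts s.eye_color                 # prints "Green"
puts s.respond_to?(:eye_color=)  # prints "false": no setter was defined
```

Because no eye_color= method exists, any attempt to reassign the attribute from outside the class raises NoMethodError, as we\u0026rsquo;ll see next.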
Let\u0026rsquo;s add a read-only attribute to this person with attr_reader, like eye_color in line 5, and initialize it:\n# Class definition class Person attr_accessor :name, :height, :weight attr_writer :thoughts # read-only accessor attr_reader :eye_color # initialize eye_color def initialize(name, eye_color) @name = name @eye_color = eye_color end end # Initialize class with a name and # add values outside the class p1 = Person.new(\u0026#39;Anne\u0026#39;, \u0026#39;Green\u0026#39;) p1.height = \u0026#39;1.80m\u0026#39; p1.weight = \u0026#39;180lbs\u0026#39; p1.thoughts = \u0026#39;pizza \u0026lt;3\u0026#39; puts \u0026#34;Name: #{p1.name}\u0026#34; puts \u0026#34;Height: #{p1.height}\u0026#34; puts \u0026#34;Weight: #{p1.weight}\u0026#34; # Read eye color puts \u0026#34;Eye Color: #{p1.eye_color}\u0026#34; # Output Name: Anne Height: 1.80m Weight: 180lbs Eye Color: Green As you can see there\u0026rsquo;s no problem initializing and accessing the eye_color attribute, but as soon as we try to change it with p1.eye_color = 'red', we can expect the following response:\ntest.rb:30:in `\u0026lt;main\u0026gt;\u0026#39;: undefined method `eye_color=\u0026#39; for #\u0026lt;Person:0x000000019defa8\u0026gt; (NoMethodError) That, of course, is because we set the accessor as read-only.\nConclusion and Technical Definition I cannot describe what an accessor is better than the definition found in the official Ruby user\u0026rsquo;s guide:\n An object\u0026rsquo;s instance variables are its attributes, the things that generally distinguish it from other objects of the same class. It is important to be able to write and read these attributes; doing so requires writing methods called attribute accessors.\n Basically, attr_accessor, attr_writer and attr_reader are a short way to define instance variables without having to write the methods to read and update them outside the class, with:\n attr_accessor we are able to read and write values outside the class. 
attr_writer allows us to write values without being able to read them outside the class. And with attr_reader we can initialize and read the class attributes without being able to reassign them.  You can read more about accessors and how useful they are in the following documentation:\n Ruby User\u0026rsquo;s Guide on Accessors The Pragmatic Programmers Guide Codecademy - Accessors  ","permalink":"https://codingdose.info/posts/accessors-in-ruby/","summary":"One thing that puzzled me as a newbie (disclaimer: I still am) are accessors in Ruby, more commonly known as setters and getters or explicitly described as attr_reader, attr_writer and attr_accessor. Now let\u0026rsquo;s dive into the code first and describe the concepts of accessors after we\u0026rsquo;re done with coding.\nInitializing a Class Let\u0026rsquo;s say we want to create a class to resemble a Person with a name, and finally let\u0026rsquo;s try to access that name outside the class:","title":"Understanding Accessors in Ruby"},{"content":"Install Rails Install the gem:\n gem install rails  Create New Rails Project Create a new project and cd into it:\n rails new ProjectTest cd ProjectTest  Change Gemfile to add PostgreSQL Heroku works with PostgreSQL as its backend database and doesn\u0026rsquo;t support SQLite3, so you\u0026rsquo;ll have to add the pg gem to the Gemfile in a production group:\ngroup :production do gem \u0026#39;pg\u0026#39; end IMPORTANT: After adding PostgreSQL to the production group in the Gemfile you\u0026rsquo;ll have to move the SQLite3 gem to a development group or delete it. If you work with PostgreSQL locally as well, just delete it entirely; but if you would like to keep SQLite3 for local development, then move the gem to a development group like this:\ngroup :development do # Access an IRB console on exception pages or by using \u0026lt;%= console %\u0026gt; anywhere in the code. 
gem \u0026#39;web-console\u0026#39;, \u0026#39;\u0026gt;= 3.3.0\u0026#39; gem \u0026#39;listen\u0026#39;, \u0026#39;\u0026gt;= 3.0.5\u0026#39;, \u0026#39;\u0026lt; 3.2\u0026#39; # Spring speeds up development by keeping your application running in the background. Read more: https://github.com/rails/spring gem \u0026#39;spring\u0026#39; gem \u0026#39;spring-watcher-listen\u0026#39;, \u0026#39;~\u0026gt; 2.0.0\u0026#39; # Use sqlite3 as the database for Active Record gem \u0026#39;sqlite3\u0026#39; end This way Heroku won\u0026rsquo;t touch the SQLite3 gem.\nUpdate Bundler Let\u0026rsquo;s update Bundler and install the gems excluding the production group:\n bundle update bundle install --without production  Install Heroku CLI Open the terminal and proceed with the installation script:\n wget -qO- https://cli-assets.heroku.com/install-ubuntu.sh | sh  After that, enter your Heroku credentials:\n heroku login  And add your SSH keys:\n heroku keys:add  We\u0026rsquo;ll get back to Heroku later; we have to make some changes to our code first.\nChanging the Index page Heroku doesn\u0026rsquo;t support the default index page of Rails, so let\u0026rsquo;s write a Hello World! as our index page to confirm it\u0026rsquo;s deploying correctly to Heroku.\nGenerate a controller with our index page:\n rails generate controller Welcome index  Now open app/views/welcome/index.html.erb and change your index page as you like, for example:\n\u0026lt;h1\u0026gt;Hello world!\u0026lt;/h1\u0026gt; \u0026lt;p\u0026gt;Yup! The deployment is working. 
Checkout \u0026lt;a href=\u0026#34;https://codingdose.info\u0026#34;\u0026gt;CodingDose()\u0026lt;/a\u0026gt;\u0026lt;/p\u0026gt; Setting the Root Page Open the routes file in config/routes.rb and change the line:\n get 'welcome/index'  For:\n root 'welcome#index'  This is how the file should look:\nRails.application.routes.draw do root \u0026#39;welcome#index\u0026#39; end Test it Locally Fire up the server with (s stands for server):\n rails s  Go to http://localhost:3000/ and your index page should be displayed.\nAdd and Commit files Newer versions of Rails initialize a git repository for you; if there\u0026rsquo;s no repository initialized, you will have to do it yourself with git init, then add the files and commit them:\n git add . git commit -am \u0026quot;initialize repository\u0026quot;  Deploy to Heroku First, create an app on Heroku:\n heroku create  And let\u0026rsquo;s push the new repo to Heroku\u0026rsquo;s servers:\n git push heroku master  Final words There you go, you set up a new Rails project, version control and a successful deployment to Heroku :)\n","permalink":"https://codingdose.info/posts/initialize-rails-and-deploy-to-heroku/","summary":"Install Rails Install the gem:\n gem install rails  Create New Rails Project Create a new project and cd into it:\n rails new ProjectTest cd ProjectTest  Change Gemfile to add PostgreSQL Heroku works with PostgreSQL as its backend database and doesn\u0026rsquo;t support SQLite3, so you\u0026rsquo;ll have to add the pg gem to the Gemfile in a production group:\ngroup :production do gem \u0026#39;pg\u0026#39; end IMPORTANT: After adding PostgreSQL to the production group in the Gemfile you\u0026rsquo;ll have to move the SQLite3 gem to a development group or delete it. If you work with PostgreSQL locally as well, just delete it entirely; but if you would like to keep SQLite3 for local development, then move the gem to a development group like this:","title":"Initialize Rails and Deploy to Heroku"},{"content":"Basic usage Where FILENAME is the filename that you want 
to calculate the hash for.\nrequire \u0026#39;digest/sha1\u0026#39; Digest::SHA1.hexdigest(FILENAME) More advanced usage Save this code as checkhash.rb, usage: checkhash.rb \u0026lt;filename\u0026gt;.\nrequire \u0026#39;digest/sha1\u0026#39; # Usage: checkhash.rb \u0026lt;filename\u0026gt; filename = ARGV.pop if filename.nil? # if no filename is specified then print help puts \u0026#39;Please specify the filename to calculate the hash\u0026#39; puts \u0026#34;Usage: #{File.basename($PROGRAM_NAME)} FILENAME\u0026#34; exit end # calculating SHA1 hash def calculate_hash(file) Digest::SHA1.hexdigest(file) end file_hash = calculate_hash(filename) puts \u0026#34;#{filename}: #{file_hash}\u0026#34; ","permalink":"https://codingdose.info/posts/calculate-filename-sha1-with-ruby/","summary":"Basic usage Where FILENAME is the filename that you want to calculate the hash for.\nrequire \u0026#39;digest/sha1\u0026#39; Digest::SHA1.hexdigest(FILENAME) More advanced usage Save this code as checkhash.rb, usage: checkhash.rb \u0026lt;filename\u0026gt;.\nrequire \u0026#39;digest/sha1\u0026#39; # Usage: checkhash.rb \u0026lt;filename\u0026gt; filename = ARGV.pop if filename.nil? # if no filename is specified then print help puts \u0026#39;Please specify the filename to calculate the hash\u0026#39; puts \u0026#34;Usage: #{File.basename($PROGRAM_NAME)} FILENAME\u0026#34; exit end # calculating SHA1 hash def calculate_hash(file) Digest::SHA1.hexdigest(file) end file_hash = calculate_hash(filename) puts \u0026#34;#{filename}: #{file_hash}\u0026#34; ","title":"Calculate Filename SHA1 with Ruby"},{"content":"Before starting let\u0026rsquo;s try HTTP requests with httpbin.org to test multiple HTTP methods with requests and JSON data. We\u0026rsquo;ll see how to extract data from a JSON-encoded response (e.g. 
{\u0026lsquo;key\u0026rsquo;: \u0026lsquo;value\u0026rsquo;})\nInstall requests library With pipenv (recommended):\npipenv install requests With pip:\npip install requests GET request Check our IP address:\nimport requests # get request, response is JSON-encoded myIP = requests.get(\u0026#39;https://httpbin.org/ip\u0026#39;) # extract value from JSON key \u0026#39;origin\u0026#39; # {\u0026#39;origin\u0026#39;: \u0026#39;38.132.120.4\u0026#39;} print(myIP.json()[\u0026#39;origin\u0026#39;]) User agent:\nimport requests user_agent = requests.get(\u0026#39;https://httpbin.org/user-agent\u0026#39;) print(user_agent.json()[\u0026#39;user-agent\u0026#39;]) Time:\nimport requests time = requests.get(\u0026#39;https://now.httpbin.org/\u0026#39;) print(time.json()[\u0026#39;now\u0026#39;][\u0026#39;rfc2822\u0026#39;]) Passing URL parameters:\nimport requests parameters = {\u0026#39;param1\u0026#39;: \u0026#39;value1\u0026#39;, \u0026#39;param2\u0026#39;: \u0026#39;value2\u0026#39;} get_params = requests.get(\u0026#39;https://httpbin.org/get\u0026#39;, params=parameters) POST request Post Request:\nimport requests post_data = requests.post(\u0026#39;https://httpbin.org/post\u0026#39;, data = {\u0026#39;hello\u0026#39;: \u0026#39;world\u0026#39;}) print(post_data) Status Code import requests req = requests.get(\u0026#39;https://httpbin.org/\u0026#39;) req.status_code Authentication import requests auth = requests.get(\u0026#39;https://httpbin.org/basic-auth/user/passwd\u0026#39;, auth=(\u0026#39;user\u0026#39;, \u0026#39;passwd\u0026#39;)) Custom HTTP Headers import requests my_headers = { \u0026#39;user-agent\u0026#39;: \u0026#39;Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)\u0026#39;, \u0026#39;Referer\u0026#39;: \u0026#39;google.com\u0026#39;, \u0026#39;Cookies\u0026#39;: \u0026#39;AmA=Cookie!\u0026#39;, } req = requests.get(\u0026#39;https://httpbin.org/anything\u0026#39;, headers=my_headers) Putting everything together A script that 
reports an IP address\u0026rsquo;s geolocation, hostname and country:\nimport argparse from requests import get as GET # argparse to enable passing command line arguments # this is totally optional, it adds functionality parser = argparse.ArgumentParser(description=\u0026#39;Get IP information.\u0026#39;) parser.add_argument(\u0026#39;host\u0026#39;, help=\u0026#39;host to analyze.\u0026#39;) args = parser.parse_args() # remote IP information ip_remote = GET(\u0026#39;http://ipinfo.io/\u0026#39; + args.host).json() # storing information in a dictionary for iteration ip_info = { \u0026#39;Hostname\u0026#39;: ip_remote[\u0026#39;hostname\u0026#39;], \u0026#39;Location\u0026#39;: \u0026#39;{}, {}\u0026#39;.format(ip_remote[\u0026#39;region\u0026#39;], ip_remote[\u0026#39;country\u0026#39;]), \u0026#39;Coordinates\u0026#39;: ip_remote[\u0026#39;loc\u0026#39;], \u0026#39;Organization\u0026#39;: ip_remote[\u0026#39;org\u0026#39;] } # print information about remote IP print(\u0026#39;Information for: {}\u0026#39;.format(ip_remote[\u0026#39;ip\u0026#39;])) for key, value in ip_info.items(): print(\u0026#39;{}: {}\u0026#39;.format(key, value)) ","permalink":"https://codingdose.info/posts/http-requests-in-python/","summary":"Before starting let\u0026rsquo;s try HTTP requests with httpbin.org to test multiple HTTP methods with requests and JSON data. We\u0026rsquo;ll see how to extract data from a JSON-encoded response (e.g. {\u0026lsquo;key\u0026rsquo;: \u0026lsquo;value\u0026rsquo;})\nInstall requests library With pipenv (recommended):\npipenv install requests With pip:\npip install requests GET request Check our IP address:\nimport requests # get request, response is JSON-encoded myIP = requests.get(\u0026#39;https://httpbin.org/ip\u0026#39;) # extract value from JSON key \u0026#39;origin\u0026#39; # {\u0026#39;origin\u0026#39;: \u0026#39;38.","title":"HTTP Requests in Python"},{"content":"What is pipenv Essentially Pipenv is pip + virtualenv and it\u0026rsquo;s a match made in heaven. 
It manages dependencies, required python versions (if pyenv is available), generates a Pipfile, which is more reliable than a requirements.txt file, and creates a virtual environment so you don\u0026rsquo;t screw up other environments and their requirements.\n It automatically creates and manages a virtualenv for your projects, as well as adds/removes packages from your Pipfile as you install/uninstall packages. It also generates the ever–important Pipfile.lock, which is used to produce deterministic builds. — From Pipenv Repository\n and it is officially recommended by Python.org:\n While pip alone is often sufficient for personal use, Pipenv is recommended for collaborative projects as it’s a higher-level tool that simplifies dependency management for common use cases. — Python.org\n Installation Using pip\npip install --user pipenv Create environment Use argument --two or --three to specify the environment\u0026rsquo;s python version.\npipenv --three Creating a virtualenv for this project… Using /home/franccesco/.pyenv/versions/3.6.4/bin/python3 to create virtualenv… ⠋Running virtualenv with interpreter /home/franccesco/.pyenv/versions/3.6.4/bin/python3 --SNIP -- Virtualenv location: /home/franccesco/.local/share/virtualenvs/test-PudGTmiz Creating a Pipfile for this project… Install Dependencies Let\u0026rsquo;s install requests as an example\npipenv install requests Installing requests… Collecting requests -- SNIP -- Adding requests to Pipfile's [packages]… PS: You have excellent taste! ✨ 🍰 ✨ Locking [dev-packages] dependencies… Locking [packages] dependencies… Updated Pipfile.lock (7b8df8)! Managing environment We can execute our python program without activating the virtual environment shell by running pipenv run python _pythonfile.py_ or by accessing the virtual environment with pipenv shell\npipenv shell Spawning environment shell (/usr/bin/fish). Use 'exit' to leave. 
source /home/franccesco/.local/share/virtualenvs/test-PudGTmiz/bin/activate.fish -- SNIP -- ~$ Display installed packages Displays package information and versions.\npipenv graph requests==2.18.4 - certifi [required: \u0026gt;=2017.4.17, installed: 2018.1.18] - chardet [required: \u0026gt;=3.0.2,\u0026lt;3.1.0, installed: 3.0.4] - idna [required: \u0026lt;2.7,\u0026gt;=2.5, installed: 2.6] - urllib3 [required: \u0026lt;1.23,\u0026gt;=1.21.1, installed: 1.22] Freezing dependencies This generates a JSON file with our environment\u0026rsquo;s dependencies.\npipenv lock { \u0026#34;_meta\u0026#34;: { \u0026#34;hash\u0026#34;: { \u0026#34;sha256\u0026#34;: \u0026#34;33a0ec7c8e3bae6f62dd618f847de92ece20e2bd4efb496927e2524b9c7b8df8\u0026#34; }, \u0026#34;host-environment-markers\u0026#34;: { \u0026#34;implementation_name\u0026#34;: \u0026#34;cpython\u0026#34;, \u0026#34;implementation_version\u0026#34;: \u0026#34;3.6.4\u0026#34;, \u0026#34;os_name\u0026#34;: \u0026#34;posix\u0026#34;, \u0026#34;platform_machine\u0026#34;: \u0026#34;x86_64\u0026#34;, \u0026#34;platform_python_implementation\u0026#34;: \u0026#34;CPython\u0026#34;, \u0026#34;platform_release\u0026#34;: \u0026#34;4.13.0-32-generic\u0026#34;, \u0026#34;platform_system\u0026#34;: \u0026#34;Linux\u0026#34;, \u0026#34;platform_version\u0026#34;: \u0026#34;#35~16.04.1-Ubuntu SMP Thu Jan 25 10:13:43 UTC 2018\u0026#34;, \u0026#34;python_full_version\u0026#34;: \u0026#34;3.6.4\u0026#34;, \u0026#34;python_version\u0026#34;: \u0026#34;3.6\u0026#34;, \u0026#34;sys_platform\u0026#34;: \u0026#34;linux\u0026#34; }, \u0026#34;pipfile-spec\u0026#34;: 6, \u0026#34;requires\u0026#34;: { \u0026#34;python_version\u0026#34;: \u0026#34;3.6\u0026#34; }, \u0026#34;sources\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;pypi\u0026#34;, \u0026#34;url\u0026#34;: \u0026#34;https://pypi.python.org/simple\u0026#34;, \u0026#34;verify_ssl\u0026#34;: true } ] }, -- SNIP -- \u0026#34;develop\u0026#34;: {} } Delete a Pipenv virtual environment pipenv --rm 
Removing virtualenv (/home/franccesco/.local/share/virtualenvs/test-PudGTmiz)… You can find more documentation at the Pipenv GitHub repo\n","permalink":"https://codingdose.info/posts/pipenv-development-workflow/","summary":"What is pipenv Essentially Pipenv is pip + virtualenv and it\u0026rsquo;s a match made in heaven. It manages dependencies, required python versions (if pyenv is available), generates a Pipfile, which is more reliable than a requirements.txt file, and creates a virtual environment so you don\u0026rsquo;t screw up other environments and their requirements.\n It automatically creates and manages a virtualenv for your projects, as well as adds/removes packages from your Pipfile as you install/uninstall packages.","title":"How to Get Started With Pipenv"},{"content":"Dictionary Example: dictionary = {\u0026#39;one\u0026#39;: 1, \u0026#39;two\u0026#39;: 2, \u0026#39;three\u0026#39;: 3, \u0026#39;four\u0026#39;: 4, \u0026#39;five\u0026#39;: 5} Sorting methods Sort keys sorted(dictionary) \u0026gt;\u0026gt; [\u0026#39;five\u0026#39;, \u0026#39;four\u0026#39;, \u0026#39;one\u0026#39;, \u0026#39;three\u0026#39;, \u0026#39;two\u0026#39;] Sort keys by value sorted(dictionary, key=dictionary.__getitem__) \u0026gt;\u0026gt; [\u0026#39;one\u0026#39;, \u0026#39;two\u0026#39;, \u0026#39;three\u0026#39;, \u0026#39;four\u0026#39;, \u0026#39;five\u0026#39;] Sort values sorted(dictionary.values()) \u0026gt;\u0026gt; [1, 2, 3, 4, 5] Reverse sorting with reverse=True sorted(dictionary, key=dictionary.__getitem__, reverse=True) \u0026gt;\u0026gt; [\u0026#39;five\u0026#39;, \u0026#39;four\u0026#39;, \u0026#39;three\u0026#39;, \u0026#39;two\u0026#39;, \u0026#39;one\u0026#39;] ","permalink":"https://codingdose.info/posts/sort-a-dictionary-with-python/","summary":"Dictionary Example: dictionary = {\u0026#39;one\u0026#39;: 1, \u0026#39;two\u0026#39;: 2, \u0026#39;three\u0026#39;: 3, \u0026#39;four\u0026#39;: 4, \u0026#39;five\u0026#39;: 5} Sorting methods Sort keys 
sorted(dictionary) \u0026gt;\u0026gt; [\u0026#39;five\u0026#39;, \u0026#39;four\u0026#39;, \u0026#39;one\u0026#39;, \u0026#39;three\u0026#39;, \u0026#39;two\u0026#39;] Sort keys by value sorted(dictionary, key=dictionary.__getitem__) \u0026gt;\u0026gt; [\u0026#39;one\u0026#39;, \u0026#39;two\u0026#39;, \u0026#39;three\u0026#39;, \u0026#39;four\u0026#39;, \u0026#39;five\u0026#39;] Sort values sorted(dictionary.values()) \u0026gt;\u0026gt; [1, 2, 3, 4, 5] Reverse sorting with reverse=True sorted(dictionary, key=dictionary.__getitem__, reverse=True) \u0026gt;\u0026gt; [\u0026#39;five\u0026#39;, \u0026#39;four\u0026#39;, \u0026#39;three\u0026#39;, \u0026#39;two\u0026#39;, \u0026#39;one\u0026#39;] ","title":"Sort a Dictionary in Python"}]