Sample code for multiple-level treemap generation. This example also includes some methods on pandas data processing, such as: How to create a pandas dataframe? How to append several dataframes to construct a bigger dataframe? How to build a hierarchical dataframe?

import pandas as pd
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
..
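(The excerpt cuts off here. As a minimal sketch of the multi-level treemap idea, with hypothetical column names and data rather than the original post's dataset:)

import pandas as pd
import plotly.express as px

# Hypothetical two-level hierarchy: continent -> country.
df = pd.DataFrame({
    'continent': ['Asia', 'Asia', 'Europe', 'Europe'],
    'country':   ['China', 'India', 'France', 'Spain'],
    'pop':       [1400, 1380, 67, 47],
})
# path= lists the hierarchy levels from root to leaf, which is what
# makes the treemap multi-level; values= sizes the tiles.
fig = px.treemap(df, path=['continent', 'country'], values='pop')
fig.show()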
import pandas as pd
from datetime import datetime

fn = 's-p-tsx-60-futures_01.csv'
sp = pd.read_csv(fn)
sp = sp.rename(columns={' value': 'value'})
sp['date'] = pd.to_datetime(sp.date)
sp['Year'] = pd.DatetimeIndex(sp['date']).year
sp['Month'] = pd.DatetimeIndex(sp['date']).month
sp['dayofweek'] = sp['date'].dt.dayofweek
sp['dayofmonth'] = pd.DatetimeIndex(sp['date']).day
sp['dayofyear'] = pd.DatetimeIndex(sp['date']).dayofyear
sp.tail(5)

            date  value  Year  Month  dayofweek  dayofmonth  dayofyear
5176  2020-04-23  857.4  2020      4          3          23        114
5177  2020-04-24  868.7  2020      4          4          24        115
5178  2020-04-27  879.7  2020      4          0          27        118
5179  2020-04-28  888.5  2020      4 ..
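(A note on the excerpt above: the same feature columns can be built more uniformly through the Series.dt accessor, which the dayofweek line already uses, instead of wrapping the column in pd.DatetimeIndex each time. The output is identical:)

# Equivalent date-feature columns via the .dt accessor.
sp['Year'] = sp['date'].dt.year
sp['Month'] = sp['date'].dt.month
sp['dayofweek'] = sp['date'].dt.dayofweek
sp['dayofmonth'] = sp['date'].dt.day
sp['dayofyear'] = sp['date'].dt.dayofyear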
start_date = dataWorld.index[0]
end_date = dataWorld.index[-1]
dateData = pd.date_range(start=start_date, end=end_date)
dataWorld['Date'] = dateData
dataWorld = dataWorld.set_index('Date')
forecastDays = 60
dateForecast = pd.date_range(start=end_date, periods=forecastDays+1)[1:]
dateObsForecast = dateData.append(dat..
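(The excerpt cuts off mid-call. For reference, a self-contained sketch of the same pattern, extending an observed daily date index by a 60-day forecast horizon; the toy dataWorld frame here is invented:)

import numpy as np
import pandas as pd

# Hypothetical daily-indexed data standing in for dataWorld.
dateData = pd.date_range('2020-01-01', '2020-04-28')
dataWorld = pd.DataFrame({'cases': np.arange(len(dateData))}, index=dateData)

forecastDays = 60
# periods=forecastDays+1 includes the last observed date itself,
# so slice it off with [1:].
dateForecast = pd.date_range(start=dateData[-1], periods=forecastDays + 1)[1:]
# One index covering both observed and forecast dates.
dateObsForecast = dateData.append(dateForecast)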
df2['date'] = df1['date'].values
df2['hour'] = df1['hour'].values

It's better to use an inner join.

case.tail(3)

                    caseCA  caseON  caseDailyCA  caseDailyON
case_Date_province
2020-04-26           47864   15411       1598.0        498.0
2020-04-27           49499   15868       1635.0        457.0
2020-04-28           50982   16337       1483.0        469.0

test.tail(3)

              testCA  testON  testDailyCA  testDailyON
date_testing
2020-04-26    734824  229638      23570.0      12020.0
2020-04-27    765056  242188      30232.0      12550.0
2020-04-28    787612  253040      22556.0      10852.0

case_test = ..
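(The join itself is truncated above. A hedged sketch of the inner join it describes, rebuilding two small frames from the rows shown; the exact call in the original post is not visible:)

import pandas as pd

case = pd.DataFrame(
    {'caseCA': [47864, 49499], 'caseON': [15411, 15868]},
    index=pd.to_datetime(['2020-04-26', '2020-04-27']),
)
test = pd.DataFrame(
    {'testCA': [734824, 765056], 'testON': [229638, 242188]},
    index=pd.to_datetime(['2020-04-26', '2020-04-27']),
)
# An inner join on the shared date index keeps only dates present in both,
# which is safer than copying .values across frames positionally.
case_test = case.merge(test, how='inner', left_index=True, right_index=True)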
By columns:

df.sort_values(by=['col1'])
df.sort_values(by=['col1', 'col2'])
df.sort_values(by='col1', ascending=False)
df.sort_values(by='col1', ascending=False, na_position='first')

By rows:

0     col1  col2  col3
row1   222    16    23
row2   333    31    11
row3   444    34    11

df.sort_values(by='row2', axis=1)

output:

0     col3  col2  col1
row1    23    16   222
row2    11    31   333
row3    11    34   444

df.sort_values(by='row2', axis=1, ascending=False)

output:

0     col1  col2  col3
row1   222    16 ..
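(A self-contained version of the row-wise sort, reconstructing the small frame shown above:)

import pandas as pd

df = pd.DataFrame(
    {'col1': [222, 333, 444], 'col2': [16, 31, 34], 'col3': [23, 11, 11]},
    index=['row1', 'row2', 'row3'],
)
# axis=1 sorts the columns themselves, ordered by the values in row 'row2':
# ascending puts col3 (11) first, descending puts col1 (333) first.
print(df.sort_values(by='row2', axis=1))
print(df.sort_values(by='row2', axis=1, ascending=False))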
output = pd.DataFrame({'date': [], 'Forecast': [], 'Cases': [], 'Fitting': [], 'Increase': []})
output['date'] = dateObsForecast
output['Forecast'] = y1*last
output['Cases'].iloc[:dataLen] = data.values*last
output['Fitting'].iloc[:dataLen] = final.values*last
output['Increase'].iloc[1:] = (y1[1:]-y1[:-1])*last
output = output.set_ind..
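(One caveat on the excerpt above: assignments through chained indexing such as output['Cases'].iloc[:dataLen] = ... trigger pandas' SettingWithCopyWarning and do not write back when copy-on-write is enabled. A sketch of the safer single-step form, reusing the excerpt's own names output, dataLen, data, final, y1 and last:)

# Positional, single-step assignment avoids chained indexing.
output.iloc[:dataLen, output.columns.get_loc('Cases')] = data.values * last
output.iloc[:dataLen, output.columns.get_loc('Fitting')] = final.values * last
output.iloc[1:, output.columns.get_loc('Increase')] = (y1[1:] - y1[:-1]) * last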
Examples for pivot_table of Pandas and crosstab of Pyspark, from my work directory: pyWorkDir/Bigdata/Pyspark/DataForYuanPei.ipynb

pivot_table:

casepandas = indcases.toPandas()
casetable1 = pd.pivot_table(casepandas, values='VALUE', index=["Case identifier number"], columns=["Case information"], aggfunc=np.sum)

crosstab:

casetable = casedf.crosstab('case_Date', 'province')
casetable = casetable.toPandas()
casetable = casetable.sort_values('case_Date_province')
cumsum_casetable = casetable.set_index('case_Date_province').cumsum()
cumsum_casetable['CA'] = cumsum_casetable.sum(axis=1)
casedftable = casedf.crosstab('case_Date', 'health_region')
health_region_table = casedftable.select(['case_Date_health_region', 'Toronto', 'Montréal', 'Vancouver Coastal',..
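(As a self-contained illustration of the pandas side, with invented data since the notebook's frames are not shown in full:)

import pandas as pd

# Long-format records: one row per (case, attribute) pair.
casepandas = pd.DataFrame({
    'Case identifier number': [1, 1, 2, 2],
    'Case information': ['age', 'gender', 'age', 'gender'],
    'VALUE': [35, 1, 52, 0],
})
# Pivot to one row per case, one column per kind of case information.
casetable1 = pd.pivot_table(
    casepandas,
    values='VALUE',
    index=['Case identifier number'],
    columns=['Case information'],
    aggfunc='sum',   # modern spelling of aggfunc=np.sum
)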
Rename column names:

df = df.rename(columns={"oldcol1": "newcol1", "oldcol2": "newcol2"})

Change the value of a column under a condition:

df_confirmed.loc[df_confirmed['country'] == "US", "country"] = "USA"

Replace NaN with some value:

df_confirmed = df_confirmed.replace(np.nan, '', regex=True)

Drop several columns (Lat and Long):

df = df.drop(['Lat', 'Long'..
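(A quick end-to-end run of these recipes on a toy frame; the column values are invented:)

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'country': ['US', 'Canada'],
    'Lat': [38.0, 56.1],
    'Long': [-97.0, -106.3],
    'oldcol1': [1, np.nan],
})
df = df.rename(columns={'oldcol1': 'newcol1'})        # rename a column
df.loc[df['country'] == 'US', 'country'] = 'USA'      # conditional update
df = df.replace(np.nan, '', regex=True)               # fill NaN with ''
df = df.drop(['Lat', 'Long'], axis=1)                 # drop two columns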