
Scaffold FG

The document presents an analysis of travel time and land use data to assess job accessibility in three Sydney regions: Sydney CBD, Parramatta, and Liverpool. It covers data processing steps such as calculating mean travel times, merging datasets, and applying a centralising strategy for job distribution, and computes Wachs & Kumagai-style job accessibility and person-weighted access for each region under baseline, centralised, and decentralised scenarios.


# Use this workspace to submit your assignment.

# Land use and travel time datasets are available in this workspace.
import pandas as pd
import numpy as np
import geopandas as gpd
import os
import matplotlib.pyplot as plt

# Importing the travel time data from the original file.


traveltime = pd.read_csv('TravelTime.csv')
traveltime

## Creating a mean travel time matrix from the available data for all possible origin-destination pairs:

# Calculating the mean travel time:


TT_mean1 = traveltime.groupby(['origin', 'destination'])['travel_time'].mean().reset_index()
TT_mean1.rename(columns={'travel_time': 'Average Travel Time'}, inplace=True)

# Renaming the columns for uniformity:


TT_mean1.rename(columns={'origin':'Origin Code','destination':'Destination Code'},inplace=True)

# Displaying the results:


TT_mean1

## Checking for the reliability of mean travel time estimates:

# Counting the records for all unique origin-destination pairs:


TT_count = traveltime.groupby(['origin', 'destination'])['travel_time'].count().reset_index()
TT_count.rename(columns={'origin':'Origin Code','destination':'Destination Code','travel_time':'Records'},inplace=True)

# Merging the information together with the mean travel time matrix to check for reliability:
TT_check = pd.merge(TT_mean1, TT_count, on=['Origin Code','Destination Code'])

# Displaying the results:


TT_check
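A mean resting on only a handful of records is less reliable than one averaged over many observations. As a hedged sketch of how `TT_check` could be screened, illustrated on toy data (the `min_records` cut-off is an illustrative assumption, not part of the assignment):

```python
import pandas as pd

# Toy travel time records: the (1, 2) pair is observed three times, (2, 1) only once.
traveltime = pd.DataFrame({
    'origin':      [1, 1, 1, 2],
    'destination': [2, 2, 2, 1],
    'travel_time': [600, 660, 630, 900],
})

# Mean travel time and record count per origin-destination pair, as in the steps above.
TT_check = (traveltime.groupby(['origin', 'destination'])['travel_time']
            .agg(['mean', 'count']).reset_index()
            .rename(columns={'mean': 'Average Travel Time', 'count': 'Records'}))

# Flag pairs whose mean rests on fewer than a chosen minimum number of records.
min_records = 2  # illustrative cut-off
TT_check['Reliable'] = TT_check['Records'] >= min_records
unreliable = TT_check[~TT_check['Reliable']]
```

Pairs flagged in `unreliable` could then be inspected or excluded before the accessibility computations.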

# Importing the land use data from the original file.


landuse = pd.read_csv('LandUse.csv')

# Renaming the columns for uniformity:


landuse.rename(columns={'population':'Population','jobs':'Jobs'},inplace=True)

# Displaying the results:


landuse

# Importing the centre distribution data from the original file.


centre_df = pd.read_csv('CentrePeripheries.csv')
centre_df

# Defining the list of codes corresponding to the centres:


Centrecodesdf = [117031337, 125041492, 127031598]

## Creating origin-destination pairs for the three centres:


## Creating the origin-destination pairs for Sydney CBD:

# Filtering out Sydney CBD data:


Syddf = centre_df[centre_df['Centre SA2_CODE'] == 117031337]

# Adding a new row for the intrazonal travel:


new_row = pd.DataFrame({'Centre SA2_CODE': [117031337],
                        'Centre SA2_NAME': ['Sydney - Haymarket - The Rocks'],
                        'Periphery SA2_CODE': [117031337],
                        'Periphery SA2_NAME': ['Sydney - Haymarket - The Rocks']})
Syddf = pd.concat([Syddf, new_row], ignore_index=True)

# Renaming the columns for uniformity:


Syddf.rename(columns={'Centre SA2_CODE':'Centre Code','Centre SA2_NAME':'Centre Name',
                      'Periphery SA2_CODE':'Periphery Code','Periphery SA2_NAME':'Periphery Name'},inplace=True)

# Displaying the updated dataframe:


Syddf

## Creating the origin-destination pairs for Parramatta:

# Filtering out Parramatta data:


Pardf = centre_df[centre_df['Centre SA2_CODE'] == 125041492]

# Adding a new row for the intrazonal travel:


new_row = pd.DataFrame({'Centre SA2_CODE': [125041492],
                        'Centre SA2_NAME': ['Parramatta - Rosehill'],
                        'Periphery SA2_CODE': [125041492],
                        'Periphery SA2_NAME': ['Parramatta - Rosehill']})
Pardf = pd.concat([Pardf, new_row], ignore_index=True)

# Renaming the columns for uniformity:


Pardf.rename(columns={'Centre SA2_CODE':'Centre Code','Centre SA2_NAME':'Centre Name',
                      'Periphery SA2_CODE':'Periphery Code','Periphery SA2_NAME':'Periphery Name'},inplace=True)

# Displaying the updated dataframe:


Pardf

## Creating the origin-destination pairs for Liverpool:

# Filtering out Liverpool data:


Livdf = centre_df[centre_df['Centre SA2_CODE'] == 127031598]

# Adding a new row for the intrazonal travel:


new_row = pd.DataFrame({'Centre SA2_CODE': [127031598],
                        'Centre SA2_NAME': ['Liverpool'],
                        'Periphery SA2_CODE': [127031598],
                        'Periphery SA2_NAME': ['Liverpool']})
Livdf = pd.concat([Livdf, new_row], ignore_index=True)

# Renaming the columns for uniformity:


Livdf.rename(columns={'Centre SA2_CODE':'Centre Code','Centre SA2_NAME':'Centre Name',
                      'Periphery SA2_CODE':'Periphery Code','Periphery SA2_NAME':'Periphery Name'},inplace=True)

# Displaying the updated dataframe:


Livdf

# Combining the travel time and land use data for the baseline scenario:
TL_ij = pd.merge(TT_mean1, landuse, left_on='Destination Code', right_on='SA2_CODE', how='left')
TL_ij
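A left merge silently leaves NaN in 'Jobs' for any destination code missing from the land use table. One way to surface such rows, sketched on toy data, is pandas' `indicator=True` option:

```python
import pandas as pd

# Toy mean travel time matrix and land use table; zone 3 has no land use record.
TT_mean1 = pd.DataFrame({'Origin Code': [1, 1],
                         'Destination Code': [2, 3],
                         'Average Travel Time': [600, 900]})
landuse = pd.DataFrame({'SA2_CODE': [1, 2],
                        'Population': [100, 200],
                        'Jobs': [50, 80]})

# indicator=True adds a '_merge' column recording which rows found a land use match.
TL_ij = pd.merge(TT_mean1, landuse, left_on='Destination Code',
                 right_on='SA2_CODE', how='left', indicator=True)

# Rows with '_merge' == 'left_only' have no land use data and would carry NaN Jobs.
unmatched = TL_ij[TL_ij['_merge'] == 'left_only']
```

An empty `unmatched` frame confirms every destination in the travel time matrix has land use data.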

## Getting the job accessibility for Sydney CBD in the baseline scenario:

# Creating a list of peripheries for the centre Sydney CBD:


Sydlist = Syddf['Periphery Code'].tolist()

# Filtering the land use and transport data relevant for Sydney CBD scenario:
SydTL1_ij = TL_ij[(TL_ij['Origin Code'].isin(Sydlist)) & (TL_ij['Destination Code'].isin(Sydlist))].reset_index(drop=True)

# Removing the 'SA2_CODE' additional column from the output:


SydTL1_ij = SydTL1_ij.drop(columns=['SA2_CODE'])

# Displaying the results:


SydTL1_ij

## Computing Wachs&Kumagai-style job accessibility for Sydney CBD in baseline scenario:

# Getting the Wachs&Kumagai-style job accessibility with a threshold:


threshold = 30*60  # 30 minutes, assuming travel times are recorded in seconds
SydTL1_ij['Include'] = SydTL1_ij['Average Travel Time']<=threshold
SydTL1_ij['Wachs'] = SydTL1_ij['Include']*SydTL1_ij['Jobs']
SydTL1_ij
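The Wachs & Kumagai measure is a cumulative-opportunities count: a destination's jobs contribute to an origin's accessibility only when the travel time is within the threshold, i.e. A_i = Σ_j Jobs_j · 1[t_ij ≤ T]. A minimal toy example of the same steps:

```python
import pandas as pd

# Toy OD table: jobs at each destination and travel times (seconds) from one origin.
od = pd.DataFrame({
    'Origin Code': [1, 1, 1],
    'Destination Code': [1, 2, 3],
    'Average Travel Time': [0, 20 * 60, 40 * 60],
    'Jobs': [100, 200, 400],
})

threshold = 30 * 60  # 30-minute cut-off, as in the assignment

# Cumulative opportunities: count a destination's jobs only if reachable in time.
od['Include'] = od['Average Travel Time'] <= threshold
od['Wachs'] = od['Include'] * od['Jobs']
access = od.groupby('Origin Code')['Wachs'].sum()
```

Here the 40-minute destination is excluded, so the origin's accessibility is the 100 + 200 jobs within the threshold.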

# Getting the job accessibilities grouped across different origins:


SydJA1 = SydTL1_ij.groupby('Origin Code')[['Wachs']].sum().reset_index()

# Adding the corresponding population values for the origins:


SydJA1 = SydJA1.merge(landuse[['SA2_CODE', 'Population']], left_on='Origin Code', right_on='SA2_CODE', how='left')
SydJA1 = SydJA1.drop(columns=['SA2_CODE'])

# Displaying the result:


SydJA1

## Getting the person-weighted access for Sydney CBD in baseline scenario:

# Calculating the total population for Sydney CBD in baseline scenario:


Syd1_P = SydJA1['Population'].sum()
Syd1_P

# Calculating the weights for each job accessibility for Sydney CBD in baseline scenario:
SydJA1['Weighted Access'] = SydJA1['Population']*SydJA1['Wachs']/Syd1_P
SydJA1

# Summing the weighted accessibilities:


Syd1_PWA = SydJA1['Weighted Access'].sum()
Syd1_PWA
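Person-weighted access is a population-weighted average of the per-origin accessibility scores, PWA = Σ_i P_i·A_i / Σ_i P_i, so heavily populated origins count for more. On toy numbers:

```python
import pandas as pd

# Toy per-origin accessibility (jobs reachable) and population.
JA = pd.DataFrame({'Origin Code': [1, 2],
                   'Wachs': [300, 100],
                   'Population': [1000, 3000]})

# Person-weighted access: weight each origin's score by its population share.
total_pop = JA['Population'].sum()
JA['Weighted Access'] = JA['Population'] * JA['Wachs'] / total_pop
PWA = JA['Weighted Access'].sum()
```

Because three quarters of the population lives at the low-access origin, the PWA of 150 sits well below the simple average of 200.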

## Getting the job accessibility for Parramatta in the baseline scenario:

# Creating a list of peripheries for the centre Parramatta:


Parlist = Pardf['Periphery Code'].tolist()

# Filtering the land use and transport data relevant for Parramatta scenario:
ParTL1_ij = TL_ij[(TL_ij['Origin Code'].isin(Parlist)) & (TL_ij['Destination Code'].isin(Parlist))].reset_index(drop=True)

# Removing the 'SA2_CODE' additional column from the output:


ParTL1_ij = ParTL1_ij.drop(columns=['SA2_CODE'])

# Displaying the results:


ParTL1_ij

## Computing Wachs&Kumagai-style accessibility for Parramatta in baseline scenario:

# Getting the Wachs&Kumagai-style accessibility with a threshold:


threshold=30*60
ParTL1_ij['Include'] = ParTL1_ij['Average Travel Time']<=threshold
ParTL1_ij['Wachs'] = ParTL1_ij['Include']*ParTL1_ij['Jobs']
ParTL1_ij

# Getting the job accessibilities grouped across different origins:


ParJA1 = ParTL1_ij.groupby('Origin Code')[['Wachs']].sum().reset_index()

# Adding the corresponding population values for the origins:


ParJA1 = ParJA1.merge(landuse[['SA2_CODE', 'Population']], left_on='Origin Code', right_on='SA2_CODE', how='left')
ParJA1 = ParJA1.drop(columns=['SA2_CODE'])

# Displaying the result:


ParJA1

## Getting the person-weighted access for Parramatta in baseline scenario:

# Calculating the total population for Parramatta in baseline scenario:


Par1_P = ParJA1['Population'].sum()
Par1_P

# Calculating the weights for each job accessibility for Parramatta in baseline scenario:
ParJA1['Weighted Access'] = ParJA1['Population']*ParJA1['Wachs']/Par1_P
ParJA1

# Summing the weighted accessibilities:


Par1_PWA = ParJA1['Weighted Access'].sum()
Par1_PWA

## Getting the job accessibility for Liverpool in the baseline scenario:

# Creating a list of peripheries for the centre Liverpool:


Livlist = Livdf['Periphery Code'].tolist()

# Filtering the land use and transport data relevant for Liverpool scenario:
LivTL1_ij = TL_ij[(TL_ij['Origin Code'].isin(Livlist)) & (TL_ij['Destination Code'].isin(Livlist))].reset_index(drop=True)

# Removing the 'SA2_CODE' additional column from the output:


LivTL1_ij = LivTL1_ij.drop(columns=['SA2_CODE'])

# Displaying the results:


LivTL1_ij

## Computing Wachs&Kumagai-style accessibility for Liverpool in baseline scenario:

# Getting the Wachs&Kumagai-style accessibility with a threshold:


threshold=30*60
LivTL1_ij['Include'] = LivTL1_ij['Average Travel Time']<=threshold
LivTL1_ij['Wachs'] = LivTL1_ij['Include']*LivTL1_ij['Jobs']
LivTL1_ij

# Getting the job accessibilities grouped across different origins:


LivJA1 = LivTL1_ij.groupby('Origin Code')[['Wachs']].sum().reset_index()

# Adding the corresponding population values for the origins:


LivJA1 = LivJA1.merge(landuse[['SA2_CODE', 'Population']], left_on='Origin Code', right_on='SA2_CODE', how='left')
LivJA1 = LivJA1.drop(columns=['SA2_CODE'])

# Displaying the result:


LivJA1

## Getting the person-weighted access for Liverpool in baseline scenario:

# Calculating the total population for Liverpool in baseline scenario:


Liv1_P = LivJA1['Population'].sum()
Liv1_P

# Calculating the weights for each job accessibility for Liverpool in baseline scenario:
LivJA1['Weighted Access'] = LivJA1['Population']*LivJA1['Wachs']/Liv1_P
LivJA1

# Summing the weighted accessibilities:


Liv1_PWA = LivJA1['Weighted Access'].sum()
Liv1_PWA

## Defining a function to shift jobs based on centralising strategy:


def shift_jobs(df, shift_amounts, centralise=True, is_percentage=False,
               quotas=False, centre_code=None):

    # Verifying the feasibility of the input data:
    if quotas:
        if len(shift_amounts) != df.shape[0]:
            raise ValueError("The length of shift amounts must match the number of rows in the dataframe.")
    else:
        if len(shift_amounts) != 1:
            raise ValueError("Shift amounts must contain a single value when quotas is False.")

    # Defining the direction of the shift (+1 moves jobs towards the centre, -1 away from it):
    direction = 1 if centralise else -1

    # Creating a copy of the dataframe to avoid modifying the original one:
    df_copy = df.copy()

    # Identifying the central region based on the centre code list:
    if centre_code is None:
        raise ValueError("Centre code list must be provided.")
    central_index = df_copy[df_copy['Periphery Code'].isin(centre_code)].index
    if central_index.empty:
        raise ValueError("No matching centre code found in the dataframe.")
    central_idx = central_index[0]

    # Applying the job shifts; the central region itself is skipped:
    for idx in range(len(df_copy)):
        if idx == central_idx:
            continue

        current_jobs = df_copy.loc[idx, 'Jobs']
        shift_amount = shift_amounts[idx] if quotas else shift_amounts[0]

        # If the shift is given as a percentage of the current jobs:
        if is_percentage:
            shift_amount = (shift_amount / 100) * current_jobs

        # Applying the direction, so positive and negative inputs behave consistently:
        shift_amount = direction * abs(shift_amount)

        # Ensuring neither the periphery nor the centre ends up with negative jobs:
        if centralise:
            shift_amount = min(shift_amount, current_jobs)
        else:
            shift_amount = -min(-shift_amount, df_copy.loc[central_idx, 'Jobs'])

        # Moving jobs between this periphery and the central region:
        df_copy.loc[idx, 'Jobs'] -= shift_amount
        df_copy.loc[central_idx, 'Jobs'] += shift_amount

    # Returning the updated job distribution table:
    return df_copy
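Independently of the function above, the centralising arithmetic can be sanity-checked on toy data: moving quotas from peripheries into the centre should conserve the total number of jobs and never drive a zone negative. A minimal sketch (the zone codes and quotas are made up for illustration):

```python
import pandas as pd

# Toy region: two peripheries and one centre (code 99), with a job count each.
df = pd.DataFrame({'Periphery Code': [10, 20, 99],
                   'Jobs': [500.0, 800.0, 1000.0]})
quotas = {10: 200.0, 20: 300.0}  # jobs to move from each periphery to the centre

# Centralising: subtract each periphery's quota and add it to the centre.
shifted = df.set_index('Periphery Code')['Jobs'].copy()
for code, amount in quotas.items():
    amount = min(amount, shifted[code])  # never remove more jobs than exist
    shifted[code] -= amount
    shifted[99] += amount
```

The total job count before and after the shift should be identical; only the distribution across zones changes.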

## Getting the land use data for Sydney CBD in the centralised scenario:

# Getting the current job and population distribution for Sydney CBD:
SydJPdf = Syddf.merge(landuse[['SA2_CODE', 'Population', 'Jobs']], left_on='Periphery Code', right_on='SA2_CODE', how='left')
SydJPdf = SydJPdf.drop(columns=['Centre Code','Centre Name','SA2_CODE'])

SydJPdf

# Defining job shift amounts for peripheries:


shift_amounts = [500, 1000, 2000, 3000, 4000, 0]

# Defining the centre code for Sydney CBD:


centre_code = [117031337]

# Shifting jobs with centralising strategy using absolute amounts:


Sydcendf = shift_jobs(SydJPdf, shift_amounts, centralise=True, is_percentage=False, quotas=True, centre_code=centre_code)

# Renaming the columns for uniformity:


Sydcendf.rename(columns={'Jobs':'New Jobs Cen'},inplace=True)

# Displaying the results:


Sydcendf

## Getting the travel time data for Sydney CBD in the centralised scenario:

# Merging the travel time data with the new job distribution
SydTL2_ij = SydTL1_ij.merge(Sydcendf[['Periphery Code', 'New Jobs Cen']], left_on='Destination Code', right_on='Periphery Code', how='left')

# Removing the old columns from the output:


SydTL2_ij = SydTL2_ij.drop(columns=['Periphery Code','Jobs'])

# Displaying the results:


SydTL2_ij

## Computing Wachs&Kumagai-style job accessibility for Sydney CBD in centralised scenario:

# Getting the Wachs&Kumagai-style job accessibility with a threshold:


threshold=30*60
SydTL2_ij['Include Cen'] = SydTL2_ij['Average Travel Time']<=threshold
SydTL2_ij['Wachs Cen'] = SydTL2_ij['Include Cen']*SydTL2_ij['New Jobs Cen']

# Removing the old columns from the output:


SydTL2_ij = SydTL2_ij.drop(columns=['Include','Wachs'])

# Displaying the output:


SydTL2_ij

# Getting the job accessibilities grouped across different origins:


SydJA2 = SydTL2_ij.groupby('Origin Code')[['Wachs Cen']].sum().reset_index()

# Adding the corresponding population values for the origins:


SydJA2 = SydJA2.merge(landuse[['SA2_CODE', 'Population']], left_on='Origin Code', right_on='SA2_CODE', how='left')
SydJA2 = SydJA2.drop(columns=['SA2_CODE'])

# Displaying the result:


SydJA2

# Calculating the weights for each job accessibility for Sydney CBD in centralised scenario:
SydJA2['Weighted Access'] = SydJA2['Population']*SydJA2['Wachs Cen']/Syd1_P
SydJA2

# Summing the weighted accessibilities:


Syd2_PWA = SydJA2['Weighted Access'].sum()
Syd2_PWA

## Getting the land use data for Parramatta in the centralised scenario:

# Getting the current job and population distribution for Parramatta:


ParJPdf = Pardf.merge(landuse[['SA2_CODE', 'Population', 'Jobs']], left_on='Periphery Code', right_on='SA2_CODE', how='left')
ParJPdf = ParJPdf.drop(columns=['Centre Code','Centre Name','SA2_CODE'])

ParJPdf

# Defining job shift amounts in percentages for peripheries:


shift_amounts = [25, 25, 25, 25, 25, 0]

# Defining the centre code for Parramatta:


centre_code = [125041492]

# Shifting jobs with centralising strategy using percentage amounts:


Parcendf = shift_jobs(ParJPdf, shift_amounts, centralise=True, is_percentage=True, quotas=True, centre_code=centre_code)

# Renaming the columns for uniformity:


Parcendf.rename(columns={'Jobs':'New Jobs Cen'},inplace=True)

# Displaying the results:


Parcendf

## Getting the travel time data for Parramatta in the centralised scenario:

# Merging the travel time data with the new job distribution
ParTL2_ij = ParTL1_ij.merge(Parcendf[['Periphery Code', 'New Jobs Cen']], left_on='Destination Code', right_on='Periphery Code', how='left')

# Removing the old columns from the output:


ParTL2_ij = ParTL2_ij.drop(columns=['Periphery Code','Jobs'])

# Displaying the results:


ParTL2_ij

## Computing Wachs&Kumagai-style job accessibility for Parramatta in centralised scenario:

# Getting the Wachs&Kumagai-style job accessibility with a threshold:


threshold=30*60
ParTL2_ij['Include Cen'] = ParTL2_ij['Average Travel Time']<=threshold
ParTL2_ij['Wachs Cen'] = ParTL2_ij['Include Cen']*ParTL2_ij['New Jobs Cen']

# Removing the old columns from the output:


ParTL2_ij = ParTL2_ij.drop(columns=['Include','Wachs'])

# Displaying the output:


ParTL2_ij

# Getting the job accessibilities grouped across different origins:


ParJA2 = ParTL2_ij.groupby('Origin Code')[['Wachs Cen']].sum().reset_index()

# Adding the corresponding population values for the origins:


ParJA2 = ParJA2.merge(landuse[['SA2_CODE', 'Population']], left_on='Origin Code', right_on='SA2_CODE', how='left')
ParJA2 = ParJA2.drop(columns=['SA2_CODE'])

# Displaying the result:


ParJA2

# Calculating the weights for each job accessibility for Parramatta in centralised scenario:
ParJA2['Weighted Access'] = ParJA2['Population']*ParJA2['Wachs Cen']/Par1_P
ParJA2

# Summing the weighted accessibilities:


Par2_PWA = ParJA2['Weighted Access'].sum()
Par2_PWA

## Getting the land use data for Liverpool in the centralised scenario:

# Getting the current job and population distribution for Liverpool:


LivJPdf = Livdf.merge(landuse[['SA2_CODE', 'Population', 'Jobs']], left_on='Periphery Code', right_on='SA2_CODE', how='left')
LivJPdf = LivJPdf.drop(columns=['Centre Code','Centre Name','SA2_CODE'])

LivJPdf

# Defining job shift amounts in absolute amounts for peripheries:
shift_amounts = [400, 9000, 3000, 100, 0, 0]

# Defining the centre code for Liverpool:


centre_code = [127031598]

# Shifting jobs with centralising strategy using absolute amounts:


Livcendf = shift_jobs(LivJPdf, shift_amounts, centralise=True, is_percentage=False, quotas=True, centre_code=centre_code)

# Renaming the columns for uniformity:


Livcendf.rename(columns={'Jobs':'New Jobs Cen'},inplace=True)

# Displaying the results:


Livcendf

## Getting the travel time data for Liverpool in the centralised scenario:

# Merging the travel time data with the new job distribution
LivTL2_ij = LivTL1_ij.merge(Livcendf[['Periphery Code', 'New Jobs Cen']], left_on='Destination Code', right_on='Periphery Code', how='left')

# Removing the old columns from the output:


LivTL2_ij = LivTL2_ij.drop(columns=['Periphery Code','Jobs'])

# Displaying the results:


LivTL2_ij

## Computing Wachs&Kumagai-style job accessibility for Liverpool in centralised scenario:

# Getting the Wachs&Kumagai-style job accessibility with a threshold:


threshold=30*60
LivTL2_ij['Include Cen'] = LivTL2_ij['Average Travel Time']<=threshold
LivTL2_ij['Wachs Cen'] = LivTL2_ij['Include Cen']*LivTL2_ij['New Jobs Cen']

# Removing the old columns from the output:


LivTL2_ij = LivTL2_ij.drop(columns=['Include','Wachs'])

# Displaying the output:


LivTL2_ij

# Getting the job accessibilities grouped across different origins:


LivJA2 = LivTL2_ij.groupby('Origin Code')[['Wachs Cen']].sum().reset_index()

# Adding the corresponding population values for the origins:


LivJA2 = LivJA2.merge(landuse[['SA2_CODE', 'Population']], left_on='Origin Code', right_on='SA2_CODE', how='left')
LivJA2 = LivJA2.drop(columns=['SA2_CODE'])

# Displaying the result:


LivJA2

# Calculating the weights for each job accessibility for Liverpool in centralised scenario:
LivJA2['Weighted Access'] = LivJA2['Population']*LivJA2['Wachs Cen']/Liv1_P
LivJA2
# Summing the weighted accessibilities:
Liv2_PWA = LivJA2['Weighted Access'].sum()
Liv2_PWA

# Recalling the job and population distribution:


SydJPdf

# Defining job shift amounts in absolute amount for peripheries:


shift_amounts = [-4000, -4000, -4000, -4000, -4000, 0]

# Defining the centre code for Sydney CBD:


centre_code = [117031337]

# Shifting jobs with decentralising strategy using absolute amounts:


Syddendf = shift_jobs(SydJPdf, shift_amounts, centralise=False, is_percentage=False, quotas=True, centre_code=centre_code)

# Renaming the columns for uniformity:


Syddendf.rename(columns={'Jobs':'New Jobs DCen'},inplace=True)

# Displaying the results:


Syddendf

## Getting the travel time data for Sydney CBD in the decentralised scenario:

# Merging the travel time data with the new job distribution
SydTL3_ij = SydTL1_ij.merge(Syddendf[['Periphery Code', 'New Jobs DCen']], left_on='Destination Code', right_on='Periphery Code', how='left')

# Removing the old columns from the output:


SydTL3_ij = SydTL3_ij.drop(columns=['Periphery Code','Jobs'])

# Displaying the results:


SydTL3_ij

## Computing Wachs&Kumagai-style job accessibility for Sydney CBD in decentralised scenario:

# Getting the Wachs&Kumagai-style job accessibility with a threshold:


threshold=30*60
SydTL3_ij['Include DCen'] = SydTL3_ij['Average Travel Time']<=threshold
SydTL3_ij['Wachs DCen'] = SydTL3_ij['Include DCen']*SydTL3_ij['New Jobs DCen']

# Removing the old columns from the output:


SydTL3_ij = SydTL3_ij.drop(columns=['Include','Wachs'])

# Displaying the output:


SydTL3_ij

# Getting the job accessibilities grouped across different origins:


SydJA3 = SydTL3_ij.groupby('Origin Code')[['Wachs DCen']].sum().reset_index()

# Adding the corresponding population values for the origins:


SydJA3 = SydJA3.merge(landuse[['SA2_CODE', 'Population']], left_on='Origin Code', right_on='SA2_CODE', how='left')
SydJA3 = SydJA3.drop(columns=['SA2_CODE'])

# Displaying the result:


SydJA3
# Calculating the weights for each job accessibility for Sydney CBD in decentralised scenario:
SydJA3['Weighted Access'] = SydJA3['Population']*SydJA3['Wachs DCen']/Syd1_P
SydJA3

# Summing the weighted accessibilities:


Syd3_PWA = SydJA3['Weighted Access'].sum()
Syd3_PWA

# Recalling the job and population distribution:


ParJPdf

# Defining job shift amounts in percentages for peripheries:


shift_amounts = [45, 15, 39, 32, 39, 0]

# Defining the centre code for Parramatta:


centre_code = [125041492]

# Shifting jobs with decentralising strategy using percentage amounts:


Pardendf = shift_jobs(ParJPdf, shift_amounts, centralise=False, is_percentage=True, quotas=True, centre_code=centre_code)

# Renaming the columns for uniformity:


Pardendf.rename(columns={'Jobs':'New Jobs DCen'},inplace=True)

# Displaying the results:


Pardendf

## Getting the travel time data for Parramatta in the decentralised scenario:

# Merging the travel time data with the new job distribution
ParTL3_ij = ParTL1_ij.merge(Pardendf[['Periphery Code', 'New Jobs DCen']], left_on='Destination Code', right_on='Periphery Code', how='left')

# Removing the old columns from the output:


ParTL3_ij = ParTL3_ij.drop(columns=['Periphery Code','Jobs'])

# Displaying the results:


ParTL3_ij

## Computing Wachs&Kumagai-style job accessibility for Parramatta in decentralised scenario:

# Getting the Wachs&Kumagai-style job accessibility with a threshold:


threshold=30*60
ParTL3_ij['Include DCen'] = ParTL3_ij['Average Travel Time']<=threshold
ParTL3_ij['Wachs DCen'] = ParTL3_ij['Include DCen']*ParTL3_ij['New Jobs DCen']

# Removing the old columns from the output:


ParTL3_ij = ParTL3_ij.drop(columns=['Include','Wachs'])

# Displaying the output:


ParTL3_ij

# Getting the job accessibilities grouped across different origins:


ParJA3 = ParTL3_ij.groupby('Origin Code')[['Wachs DCen']].sum().reset_index()

# Adding the corresponding population values for the origins:


ParJA3 = ParJA3.merge(landuse[['SA2_CODE', 'Population']], left_on='Origin Code', right_on='SA2_CODE', how='left')
ParJA3 = ParJA3.drop(columns=['SA2_CODE'])

# Displaying the result:


ParJA3

# Calculating the weights for each job accessibility for Parramatta in decentralised scenario:
ParJA3['Weighted Access'] = ParJA3['Population']*ParJA3['Wachs DCen']/Par1_P
ParJA3

# Summing the weighted accessibilities:


Par3_PWA = ParJA3['Weighted Access'].sum()
Par3_PWA

# Recalling the job and population distribution:


LivJPdf

# Defining job shift amounts in absolute amount for peripheries:


shift_amounts = [-400, -70, -440, -100, -40, 0]

# Defining the centre code for Liverpool:


centre_code = [127031598]

# Shifting jobs with decentralising strategy using absolute amounts:


Livdendf = shift_jobs(LivJPdf, shift_amounts, centralise=False, is_percentage=False, quotas=True, centre_code=centre_code)

# Renaming the columns for uniformity:


Livdendf.rename(columns={'Jobs':'New Jobs DCen'},inplace=True)

# Displaying the results:


Livdendf

## Getting the travel time data for Liverpool in the decentralised scenario:

# Merging the travel time data with the new job distribution
LivTL3_ij = LivTL1_ij.merge(Livdendf[['Periphery Code', 'New Jobs DCen']], left_on='Destination Code', right_on='Periphery Code', how='left')

# Removing the old columns from the output:


LivTL3_ij = LivTL3_ij.drop(columns=['Periphery Code','Jobs'])

# Displaying the results:


LivTL3_ij

## Computing Wachs&Kumagai-style job accessibility for Liverpool in decentralised scenario:

# Getting the Wachs&Kumagai-style job accessibility with a threshold:


threshold=30*60
LivTL3_ij['Include DCen'] = LivTL3_ij['Average Travel Time']<=threshold
LivTL3_ij['Wachs DCen'] = LivTL3_ij['Include DCen']*LivTL3_ij['New Jobs DCen']

# Removing the old columns from the output:


LivTL3_ij = LivTL3_ij.drop(columns=['Include','Wachs'])

# Displaying the output:


LivTL3_ij

# Getting the job accessibilities grouped across different origins:


LivJA3 = LivTL3_ij.groupby('Origin Code')[['Wachs DCen']].sum().reset_index()

# Adding the corresponding population values for the origins:


LivJA3 = LivJA3.merge(landuse[['SA2_CODE', 'Population']], left_on='Origin Code', right_on='SA2_CODE', how='left')
LivJA3 = LivJA3.drop(columns=['SA2_CODE'])

# Displaying the result:


LivJA3

# Calculating the weights for each job accessibility for Liverpool in decentralised scenario:
LivJA3['Weighted Access'] = LivJA3['Population']*LivJA3['Wachs DCen']/Liv1_P
LivJA3

# Summing the weighted accessibilities:


Liv3_PWA = LivJA3['Weighted Access'].sum()
Liv3_PWA
