This program prepares selected data from the Texas 8th Grade Cohort Longitudinal Study (FY 2008 cohort) for mapping

Begin by downloading the Cohort Workbook from the THECB website.

The selected data focus on enrollment and completion rates in higher education. In addition to examining the overall cohort, the data also describe the target populations from the Texas Higher Education Strategic Plan: African American, Hispanic, economically disadvantaged, and male students. These groups have had historically lower rates of participation and success in higher education. Data are reported by TEA region.

Save the workbook as 'Data/8th Grade FY2008 Cohort Workbook.xlsx'
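
If you want to script that step, here is a minimal sketch using requests (the URL below is a placeholder; copy the actual workbook link from the THECB website):

import os
import requests

#Hypothetical URL; replace with the real workbook link from the THECB website
url = 'https://www.highered.texas.gov/8th-grade-fy2008-cohort-workbook.xlsx'

#Make the Data folder if it doesn't exist
if not os.path.exists('Data'):
    os.makedirs('Data')

r = requests.get(url)
r.raise_for_status() #Fail loudly if the download didn't succeed
with open('Data/8th Grade FY2008 Cohort Workbook.xlsx', 'wb') as f:
    f.write(r.content)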

In [1]:
import pandas as pd
import numpy as np
import requests
import zipfile
import arcpy
import io
import os

arcpy.env.overwriteOutput = True
pd.options.display.max_rows = 10

#Make geodatabase if it doesn't exist
if not os.path.exists("Data/Cohort.gdb"):
    arcpy.CreateFileGDB_management("Data", "Cohort.gdb")

Extract the gender by ethnicity data

In [2]:
xl = pd.read_excel('Data/8th Grade FY2008 Cohort Workbook.xlsx', sheet_name='TEA by Gender by Ethnicity', header=None, index_col=None, skiprows=6)

#Keep the columns you need
xl2=xl[[0,1,2,3,4,17,18,21,22]]

#Drop the rows you don't need and then name the columns
GenEth=xl2[:160]
GenEth.columns=['TEAReg','RegName','Gender','Eth', 'CohoN', 'nEnr', 'pEnr', 'nComp', 'pComp']

print(GenEth)  #Check results to be sure you grabbed all the data
    TEAReg      RegName  Gender               Eth    CohoN    nEnr    pEnr  \
0        1     Edinburg  Female  African American     29.0    19.0  0.6552   
1        1     Edinburg  Female          Hispanic  12703.0  7537.0  0.5933   
2        1     Edinburg  Female             White    319.0   213.0  0.6677   
3        1     Edinburg  Female            Others     69.0    37.0  0.5362   
4        1     Edinburg    Male  African American     36.0    23.0  0.6389   
..     ...          ...     ...               ...      ...     ...     ...   
155     20  San Antonio  Female            Others    280.0   188.0  0.6714   
156     20  San Antonio    Male  African American   1102.0   528.0  0.4791   
157     20  San Antonio    Male          Hispanic   9226.0  3979.0  0.4313   
158     20  San Antonio    Male             White   3538.0  1994.0  0.5636   
159     20  San Antonio    Male            Others    264.0   172.0  0.6515   

      nComp   pComp  
0      11.0  0.3793  
1    3341.0  0.2630  
2     121.0  0.3793  
3      25.0  0.3623  
4       9.0  0.2500  
..      ...     ...  
155   123.0  0.4393  
156   130.0  0.1180  
157  1301.0  0.1410  
158  1055.0  0.2982  
159    93.0  0.3523  

[160 rows x 9 columns]

Collapse on ethnicity to remove gender and get African American and Hispanic totals by region.

In [3]:
#Keep Hispanic and African American counts, collapse to remove gender, and then calculate percents 
EthCounts=GenEth.drop(GenEth.columns[[2,6,8]], axis=1) #axis=0 for rows, axis=1 for columns

#Make African American Group
AAtemp=EthCounts.loc[EthCounts['Eth']=='African American'].copy() #copy to avoid chained indexing
AA=AAtemp.groupby(["TEAReg", "RegName","Eth"], as_index=False).sum()
AA['AApEnr']=100*AA['nEnr']/AA['CohoN']
AA['AApComp']=100*AA['nComp']/AA['CohoN']
AA=AA.drop(['Eth'], axis=1) 
AA.columns=['TEAReg','RegName','AACoho', 'AAnEnr','AAnComp','AApEnr','AApComp']

#Make Hispanic Group
Hisptemp=EthCounts.loc[EthCounts['Eth']=='Hispanic'].copy() #copy to avoid chained indexing
Hisp=Hisptemp.groupby(["TEAReg", "RegName","Eth"], as_index=False).sum()
Hisp['HispEnr']=100*Hisp['nEnr']/Hisp['CohoN']
Hisp['HispComp']=100*Hisp['nComp']/Hisp['CohoN']
Hisp=Hisp.drop(['Eth'], axis=1)
Hisp.columns=['TEAReg','RegName','HisCoho', 'HisnEnr','HisnComp','HispEnr','HispComp']

print(AA) #Check results
print(Hisp)
    TEAReg         RegName   AACoho  AAnEnr  AAnComp     AApEnr    AApComp
0        1        Edinburg     65.0    42.0     20.0  64.615385  30.769231
1        2  Corpus Christi    291.0   146.0     37.0  50.171821  12.714777
2        3        Victoria    396.0   210.0     45.0  53.030303  11.363636
3        4         Houston  16341.0  9148.0   2491.0  55.981886  15.243865
4        5        Beaumont   1777.0   951.0    250.0  53.517164  14.068655
..     ...             ...      ...     ...      ...        ...        ...
15      16        Amarillo    323.0   173.0     35.0  53.560372  10.835913
16      17         Lubbock    433.0   201.0     47.0  46.420323  10.854503
17      18         Midland    244.0   108.0     35.0  44.262295  14.344262
18      19         El Paso    384.0   166.0     48.0  43.229167  12.500000
19      20     San Antonio   2110.0  1089.0    371.0  51.611374  17.582938

[20 rows x 7 columns]
    TEAReg         RegName  HisCoho  HisnEnr  HisnComp    HispEnr   HispComp
0        1        Edinburg  25765.0  14238.0    5617.0  55.261013  21.800893
1        2  Corpus Christi   5124.0   2501.0     840.0  48.809524  16.393443
2        3        Victoria   1810.0    748.0     240.0  41.325967  13.259669
3        4         Houston  30602.0  13570.0    4935.0  44.343507  16.126397
4        5        Beaumont    660.0    265.0     136.0  40.151515  20.606061
..     ...             ...      ...      ...       ...        ...        ...
15      16        Amarillo   2177.0    964.0     334.0  44.281121  15.342214
16      17         Lubbock   2691.0   1132.0     355.0  42.066146  13.192122
17      18         Midland   3142.0   1389.0     466.0  44.207511  14.831318
18      19         El Paso  11246.0   6737.0    2193.0  59.905744  19.500267
19      20     San Antonio  17903.0   8731.0    3266.0  48.768363  18.242753

[20 rows x 7 columns]

Get data on male students by TEA region

In [4]:
#Get counts of male students by region, collapse on gender.
GenCounts=GenEth.drop(GenEth.columns[[3,6,8]], axis=1) #axis=0 for rows, axis=1 for columns
Allmalestemp=GenCounts.loc[GenCounts['Gender']=='Male'].copy() #copy to avoid chained indexing
Allmales=Allmalestemp.groupby(["TEAReg", "RegName"], as_index=False).sum().copy()
Allmales['AllmpEnr']=100*Allmales['nEnr']/Allmales['CohoN']
Allmales['AllmpComp']=100*Allmales['nComp']/Allmales['CohoN']
Allmales.columns=['TEAReg', 'RegName','TotmCoho', 'TotmnEnr','TotmnComp','TotmpEnr','TotmpComp']


print(Allmales)
    TEAReg         RegName  TotmCoho  TotmnEnr  TotmnComp   TotmpEnr  \
0        1        Edinburg   13548.0    7016.0     2428.0  51.786242   
1        2  Corpus Christi    3921.0    1842.0      648.0  46.977812   
2        3        Victoria    2013.0     953.0      419.0  47.342275   
3        4         Houston   37913.0   18928.0     7201.0  49.924828   
4        5        Beaumont    3006.0    1411.0      539.0  46.939454   
..     ...             ...       ...       ...        ...        ...   
15      16        Amarillo    2987.0    1459.0      561.0  48.844995   
16      17         Lubbock    2771.0    1290.0      514.0  46.553591   
17      18         Midland    2733.0    1125.0      385.0  41.163557   
18      19         El Paso    6484.0    3574.0     1032.0  55.120296   
19      20     San Antonio   14130.0    6673.0     2579.0  47.225761   

    TotmpComp  
0   17.921464  
1   16.526396  
2   20.814704  
3   18.993485  
4   17.930805  
..        ...  
15  18.781386  
16  18.549260  
17  14.087084  
18  15.916101  
19  18.251946  

[20 rows x 7 columns]

Get data on economically disadvantaged students by TEA region

In [5]:
xlEcon = pd.read_excel('Data/8th Grade FY2008 Cohort Workbook.xlsx', sheet_name='TEA Region by Eco', header=None, index_col=None, skiprows=6)

#Keep just the columns you need
xlEcon2=xlEcon[[0,1,2,3,16,17,20,21]]
EconTemp=xlEcon2.loc[xlEcon2[2]=='Economically Disadvantaged'].copy()

EconTemp2=EconTemp.drop([2], axis=1).copy()

#Get Region Totals and drop the rows you don't need
Econ=EconTemp2[:20].copy()
Econ.columns=['TEAReg','RegName','EcoCoho', 'EconEnr', 'EcopEnr', 'EconComp', 'EcopComp']

Econ['EcopEnr']=100*Econ['EcopEnr']
Econ['EcopComp']=100*Econ['EcopComp']
Econ['TEAReg']=Econ['TEAReg'].astype(int) #Make sure region is an integer for merging later

print(Econ)
    TEAReg         RegName  EcoCoho  EconEnr  EcopEnr  EconComp  EcopComp
1        1        Edinburg  22598.0  11792.0    52.18    4348.0     19.24
3        2  Corpus Christi   4284.0   1769.0    41.29     472.0     11.02
5        3        Victoria   1954.0    739.0    37.82     205.0     10.49
7        4         Houston  37004.0  16213.0    43.81    4972.0     13.44
9        5        Beaumont   2952.0   1213.0    41.09     341.0     11.55
..     ...             ...      ...      ...      ...       ...       ...
31      16        Amarillo   2928.0   1264.0    43.17     389.0     13.29
33      17         Lubbock   2987.0   1163.0    38.94     322.0     10.78
35      18         Midland   2481.0    918.0    37.00     266.0     10.72
37      19         El Paso   9495.0   5299.0    55.81    1583.0     16.67
39      20     San Antonio  15791.0   6759.0    42.80    2224.0     14.08

[20 rows x 7 columns]

Get overall totals by region

In [6]:
xl = pd.read_excel('Data/8th Grade FY2008 Cohort Workbook.xlsx', sheet_name='Summary', header=None, index_col=None, skiprows=16)

#Keep the columns you need
xl2=xl[[0,1,2,15,16,19,20]]

#Get Region Totals and drop the rows you don't need
RegTotals=xl2[:20].copy()
RegTotals.columns=['TEAReg','RegName','TotCoho', 'TotnEnr', 'TotpEnr', 'TotnComp', 'TotpComp']

RegTotals['TotpEnr']=100*RegTotals['TotpEnr']
RegTotals['TotpComp']=100*RegTotals['TotpComp']
RegTotals['TEAReg']=RegTotals['TEAReg'].astype(int) #Make sure region is an integer for merging later

print(RegTotals)
    TEAReg         RegName  TotCoho  TotnEnr    TotpEnr  TotnComp   TotpComp
0        1        Edinburg  26668.0  14822.0  55.579721    5926.0  22.221389
1        2  Corpus Christi   7574.0   3919.0  51.742804    1484.0  19.593346
2        3        Victoria   3850.0   2047.0  53.168831     910.0  23.636364
3        4         Houston  73414.0  40109.0  54.633994   17037.0  23.206745
4        5        Beaumont   5979.0   3117.0  52.132464    1277.0  21.358087
..     ...             ...      ...      ...        ...       ...        ...
15      16        Amarillo   5719.0   3112.0  54.415108    1324.0  23.150901
16      17         Lubbock   5373.0   2773.0  51.609901    1199.0  22.315280
17      18         Midland   5361.0   2618.0  48.834173    1040.0  19.399366
18      19         El Paso  12771.0   7522.0  58.899068    2518.0  19.716545
19      20     San Antonio  27342.0  14330.0  52.410211    6217.0  22.737912

[20 rows x 7 columns]

Merge into one table, create additional variables, and finalize formatting.

In [7]:
#Combine into one table
All=pd.merge(AA, Hisp,on=['TEAReg', 'RegName']).copy()
All=pd.merge(All, Allmales,on=['TEAReg', 'RegName']).copy()
All=pd.merge(All, RegTotals,on=['TEAReg', 'RegName']).copy()
All=pd.merge(All, Econ,on=['TEAReg', 'RegName']).copy()

#Calculate % point differences for AA/Hisp/Males/Eco enrollment and completion rates from total cohort by region
All['AAEnrpDi']=All['AApEnr']-All['TotpEnr']
All['HisEnrpDi']=All['HispEnr']-All['TotpEnr']
All['MaleEnrpDi']=All['TotmpEnr']-All['TotpEnr'] #all males
All['EcoEnrpDi']=All['EcopEnr']-All['TotpEnr']
All['AAComppDi']=All['AApComp']-All['TotpComp']
All['HisComppDi']=All['HispComp']-All['TotpComp']
All['MaleCpDi']=All['TotmpComp']-All['TotpComp'] #all males
All['EcoComppDi']=All['EcopComp']-All['TotpComp']

Final=All #Note: Final is another name for the same DataFrame, not a copy

#Make perc of total for AA, Hisp, and Eco
Final['AApCoho']=100*All['AACoho']/All['TotCoho']
Final['HispCoho']=100*All['HisCoho']/All['TotCoho']
Final['EcopCoho']=100*All['EcoCoho']/All['TotCoho']

#Make variables with "_" suffix. They will have zero decimals and be used as symbol layers
Final['TotpEnr_']=Final['TotpEnr']
Final['TotpComp_']=Final['TotpComp'] 
Final['TotmpComp_']=Final['TotmpComp']
Final['AApComp_']=Final['AApComp']
Final['HispComp_']=Final['HispComp']
Final['EcopComp_']=Final['EcopComp']


Final['AApCoho_']=Final['AApCoho']
Final['HispCoho_']=Final['HispCoho']
Final['EcopCoho_']=Final['EcopCoho']
Final['AAComppD_']=Final['AAComppDi']
Final['HisComppD_']=Final['HisComppDi']
Final['EcoComppD_']=Final['EcoComppDi']
Final['AAEnrpD_']=Final['AAEnrpDi']
Final['HisEnrpD_']=Final['HisEnrpDi']
Final['EcoEnrpD_']=Final['EcoEnrpDi']
Final['MaleEnrpD_']=Final['MaleEnrpDi']
Final['MaleCpD_']=Final['MaleCpDi']


#set percentages to have just one decimal place
Processed = Final.round({'AApEnr': 1, 'AApComp': 1, 
             'HispEnr': 1, 'HispComp': 1, 
             'TotmpEnr': 1, 'TotmpComp': 1, 
             'TotpEnr': 1, 'TotpComp': 1, 
            'AAEnrpDi': 1,  'AAComppDi': 1,
            'HisEnrpDi': 1, 'HisComppDi': 1, 
             'AApCoho': 1, 'HispCoho': 1, 'EcopCoho':1,  
             'EcopEnr': 1, 'EcopComp': 1, 
            'EcoEnrpDi': 1, 'EcoComppDi': 1, 
            'MaleEnrpDi':1, 'MaleCpDi':1,
            'TotpEnr_':0, 'TotpComp_':0, 'TotmpComp_': 0,
            'AApComp_': 0, 'HispComp_': 0, 'EcopComp_': 0, 
            'AApCoho_':0, 'HispCoho_':0, 'EcopCoho_':0, 
            'AAComppD_':0, 'HisComppD_':0, 'EcoComppD_':0, 'MaleCpD_':0,
            'AAEnrpD_':0, 'HisEnrpD_':0, 'EcoEnrpD_':0, 'MaleEnrpD_':0}).copy()

Processed.to_csv('Data/ProcessedData.csv', index=False)
print(Processed)
    TEAReg         RegName   AACoho  AAnEnr  AAnComp  AApEnr  AApComp  \
0        1        Edinburg     65.0    42.0     20.0    64.6     30.8   
1        2  Corpus Christi    291.0   146.0     37.0    50.2     12.7   
2        3        Victoria    396.0   210.0     45.0    53.0     11.4   
3        4         Houston  16341.0  9148.0   2491.0    56.0     15.2   
4        5        Beaumont   1777.0   951.0    250.0    53.5     14.1   
..     ...             ...      ...     ...      ...     ...      ...   
15      16        Amarillo    323.0   173.0     35.0    53.6     10.8   
16      17         Lubbock    433.0   201.0     47.0    46.4     10.9   
17      18         Midland    244.0   108.0     35.0    44.3     14.3   
18      19         El Paso    384.0   166.0     48.0    43.2     12.5   
19      20     San Antonio   2110.0  1089.0    371.0    51.6     17.6   

    HisCoho  HisnEnr  HisnComp    ...     HispCoho_  EcopCoho_  AAComppD_  \
0   25765.0  14238.0    5617.0    ...          97.0       85.0        9.0   
1    5124.0   2501.0     840.0    ...          68.0       57.0       -7.0   
2    1810.0    748.0     240.0    ...          47.0       51.0      -12.0   
3   30602.0  13570.0    4935.0    ...          42.0       50.0       -8.0   
4     660.0    265.0     136.0    ...          11.0       49.0       -7.0   
..      ...      ...       ...    ...           ...        ...        ...   
15   2177.0    964.0     334.0    ...          38.0       51.0      -12.0   
16   2691.0   1132.0     355.0    ...          50.0       56.0      -11.0   
17   3142.0   1389.0     466.0    ...          59.0       46.0       -5.0   
18  11246.0   6737.0    2193.0    ...          88.0       74.0       -7.0   
19  17903.0   8731.0    3266.0    ...          65.0       58.0       -5.0   

    HisComppD_  EcoComppD_  AAEnrpD_  HisEnrpD_  EcoEnrpD_  MaleEnrpD_  \
0         -0.0        -3.0       9.0       -0.0       -3.0        -4.0   
1         -3.0        -9.0      -2.0       -3.0      -10.0        -5.0   
2        -10.0       -13.0      -0.0      -12.0      -15.0        -6.0   
3         -7.0       -10.0       1.0      -10.0      -11.0        -5.0   
4         -1.0       -10.0       1.0      -12.0      -11.0        -5.0   
..         ...         ...       ...        ...        ...         ...   
15        -8.0       -10.0      -1.0      -10.0      -11.0        -6.0   
16        -9.0       -12.0      -5.0      -10.0      -13.0        -5.0   
17        -5.0        -9.0      -5.0       -5.0      -12.0        -8.0   
18        -0.0        -3.0     -16.0        1.0       -3.0        -4.0   
19        -4.0        -9.0      -1.0       -4.0      -10.0        -5.0   

    MaleCpD_  
0       -4.0  
1       -3.0  
2       -3.0  
3       -4.0  
4       -3.0  
..       ...  
15      -4.0  
16      -4.0  
17      -5.0  
18      -4.0  
19      -4.0  

[20 rows x 55 columns]

Now split the prepared data into two datasets: one for polygons and one for points.

The polygon data will have all of the variables except the zero-decimal "_" columns; the point data will have just those rounded percentages, plus TEAReg for joining.

In [8]:
#Polygon data
ProcessedPolys=Processed.iloc[:,0:38].copy() 
ProcessedPolys.to_csv('Data/ProcessedPolys.csv', index=False)

#Point Data
ProcessedPoints=Processed.iloc[:,np.r_[0:1,38:55]].copy() #np.r_ concatenates the column index ranges
ProcessedPoints.to_csv('Data/ProcessedPoints.csv', index=False)

The rest of the code prepares the shapefiles for mapping.

But first, download and save the Education Service Center region (TEA region) shapefiles:
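
This notebook assumes the unzipped shapefile is saved in 'Data/rawESC_Regions'. If the regions are published as a zipped shapefile, a sketch like the following can fetch and unpack them (the URL is a placeholder; use the actual download link). This uses the requests, zipfile, and io imports from the first cell:

import requests
import zipfile
import io

#Hypothetical URL; replace with the real link to the zipped ESC regions shapefile
url = 'https://example.com/ESC_Regions.zip'

r = requests.get(url)
r.raise_for_status() #Fail loudly if the download didn't succeed
zipfile.ZipFile(io.BytesIO(r.content)).extractall('Data/rawESC_Regions')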

In [9]:
#copy shapefiles to geodatabase
arcpy.FeatureClassToGeodatabase_conversion('Data/rawESC_Regions/ESC_Regions.shp', 'Data/Cohort.gdb')

#List fields in dataset
fields = arcpy.ListFields('Data/Cohort.gdb/ESC_Regions')

for field in fields:
    print("{0} is a type of {1} with a length of {2}"
          .format(field.name, field.type, field.length))
OBJECTID_1 is a type of OID with a length of 4
Shape is a type of Geometry with a length of 0
FID_1 is a type of Integer with a length of 4
OBJECTID is a type of Integer with a length of 4
CITY is a type of String with a length of 80
REGION is a type of String with a length of 80
ORG_E_ID is a type of Integer with a length of 4
WEBSITE is a type of String with a length of 80
SHAPE_Leng is a type of Double with a length of 8
Shape_Length is a type of Double with a length of 8
Shape_Area is a type of Double with a length of 8
In [10]:
#Delete unnecessary fields
arcpy.DeleteField_management("Data/Cohort.gdb/ESC_Regions", ["FID_1", "OBJECTID", "CITY", 'REGION', 'ORG_E_ID', 'WEBSITE', 'SHAPE_Leng'])                            

#Create Texas Outline by dissolving the TEA region polygons
arcpy.Dissolve_management("Data/Cohort.gdb/ESC_Regions","Data/Cohort.gdb/TexasOutline")

#Add datasets to geodatabase
arcpy.TableToTable_conversion('Data/ProcessedPolys.csv', 'Data/Cohort.gdb', 'PolygonData')
arcpy.TableToTable_conversion('Data/ProcessedPoints.csv', 'Data/Cohort.gdb', 'PointData')

#Merge Cohort Data to TEA Region Polygons
arcpy.JoinField_management('Data/Cohort.gdb/ESC_Regions', 'OBJECTID_1','Data/Cohort.gdb/PolygonData', 'TEAReg')
Out[10]:
<Result 'Data/Cohort.gdb/ESC_Regions'>
In [11]:
#Make folder if it doesn't exist
if not os.path.exists('Data/FinalShapefiles'):
    os.makedirs('Data/FinalShapefiles')
    
#Export merged TEARegions with Cohort data to shapefile
arcpy.FeatureClassToShapefile_conversion('Data/Cohort.gdb/ESC_Regions', 'Data/FinalShapefiles')
Out[11]:
<Result 'Data\\FinalShapefiles'>

Now make the centroids for the TEA Regions

(Requires the advanced license)

In [12]:
#  Set local variables
inFeatures = 'Data/rawESC_Regions/ESC_Regions.shp'
outFeatureClass = "Data/Cohort.gdb/ESC_Points"

# Use the FeatureToPoint function to find a point inside each region
arcpy.FeatureToPoint_management(inFeatures, outFeatureClass)

#Merge Cohort Data to TEA Region Points
arcpy.JoinField_management('Data/Cohort.gdb/ESC_Points', 'OBJECTID_1','Data/Cohort.gdb/PointData', 'TEAReg')
Out[12]:
<Result 'Data/Cohort.gdb/ESC_Points'>
In [13]:
#Export merged TEARegion Points to shapefile
arcpy.FeatureClassToShapefile_conversion('Data/Cohort.gdb/ESC_Points', 'Data/FinalShapefiles')
Out[13]:
<Result 'Data\\FinalShapefiles'>

Make a mask around the state of Texas

In [14]:
# A list of coordinate pairs that outlines a large rectangle around Texas
feature_info = [[[-140, 15], [-60, 15], [-60, 45], [-140, 45], [-140, 15]]]

# A list that will hold each of the Polygon objects
features = []

for feature in feature_info:
    # Create a Polygon object based on the array of points
    features.append(
        arcpy.Polygon(
            arcpy.Array([arcpy.Point(*coords) for coords in feature])))

# Persist a copy of the Polygon objects using CopyFeatures
arcpy.CopyFeatures_management(features, "Data/Cohort.gdb/TempMask")
Out[14]:
<Result 'Data\\Cohort.gdb\\TempMask'>
In [15]:
# Carve out the shape of Texas for the mask
arcpy.Erase_analysis("Data/Cohort.gdb/TempMask", "Data/Cohort.gdb/TexasOutline", "Data/FinalShapefiles/TexasMask.shp")
Out[15]:
<Result 'Data\\FinalShapefiles\\TexasMask.shp'>

Now, switch to Linux and use GDAL to convert the shapefiles to GeoJSON. Then use the Tippecanoe tool to make .mbtiles files.

I used the following commands:

ogr2ogr -f GeoJSON Data/CohortTEARegionPolys.json Data/FinalShapefiles/ESC_Regions.shp -progress

ogr2ogr -f GeoJSON Data/TexasOutline.json Data/FinalShapefiles/TexasOutline.shp -progress

ogr2ogr -f GeoJSON Data/CohortTEARegionPoints.json Data/FinalShapefiles/ESC_Points.shp -progress

ogr2ogr -f GeoJSON Data/TexasMask.json Data/FinalShapefiles/TexasMask.shp -progress

tippecanoe --output=8thGradeCohort2008TEARegionData.mbtiles Data/CohortTEARegionPoints.json Data/CohortTEARegionPolys.json Data/TexasMask.json -r1 --drop-fraction-as-needed --simplification=9 --maximum-zoom=8 --minimum-zoom=3 --exclude=OBJECTID_1 --detect-shared-borders

Finally, we uploaded the custom .mbtiles file to Mapbox Studio and served the tiles from there. You could also set up your own vector tile server using TileServer GL.
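
For example, a minimal local setup with the lightweight TileServer GL package looks like this (assuming Node.js is installed):

npm install -g tileserver-gl-light

tileserver-gl-light 8thGradeCohort2008TEARegionData.mbtiles

By default the tiles are then served locally at http://localhost:8080.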