How to Generate Tile Data in MapGIS K9


Tile data is the result of clipping vector data. Tiles respond much faster than raw vector data, which is why they are used so widely in WebGIS. Before creating tiles, prepare your vector data. Then, in the map editor, right-click on the toolbar and select the page cache (tile cache) tool.

After you click OK, tile data in HDF format is generated under the save directory you chose. Note, however, that this HDF file should not be confused with the database's HDF file.

React handles arrays and plain objects easily enough, but a while ago during development I ran into data held in a Map collection, in the following format:

(This is the data in simplified form, of course.)

You can actually read such an object directly via list.actions.create and list.actions.delete, but after fetching the data from the backend we cannot know in advance which keys the map contains, or how many entries it has.

1. For this kind of map operation, we can first obtain all of the keys inside actions.

The method returns an array.

But if you iterate over that array and use each key to look up the corresponding value, the lookup fails and returns undefined.

2. There is a simpler way: we can get the values in the map directly.

The returned data looks like this:

Finally, converting the values into an array lets you iterate over them and read the data.

This is simply the parameter Map that the front end submits to a Servlet or an Action. If you submit via a form, request.getParameterMap() contains the data of every input tag in the form, keyed by the input's name with the input's value as the value. If you submit via AJAX, it contains whatever parameters you assembled yourself.
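One detail worth checking when getParameterMap "fails": in the Servlet API, request.getParameterMap() returns a Map<String, String[]> (a parameter name can repeat, e.g. for checkboxes), not a Map<String, String>. Below is a small stand-alone sketch that simulates such a map outside a servlet container; the parameter names are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class ParameterMapDemo {
    public static void main(String[] args) {
        // Simulated result of request.getParameterMap() for a form with an
        // input named "user" and a checkbox group named "hobby".
        Map<String, String[]> params = new HashMap<>();
        params.put("user", new String[] { "alice" });
        params.put("hobby", new String[] { "gis", "java" });

        for (Map.Entry<String, String[]> e : params.entrySet()) {
            // The value is an array, not a single String:
            // treating it as a String is a common cause of broken reads.
            System.out.println(e.getKey() + " = " + String.join(",", e.getValue()));
        }
    }
}
```

If your code casts the values to String directly, a ClassCastException or garbage output is the usual symptom.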

Here is a simple example that may give you some ideas.

Map<String, String> map = new HashMap<String, String>();
map.put("1", "11111");
map.put("2", "22222");
map.put("3", "33333");

The method below can be turned into a shared utility. After iterating, put the truncated values back into the map and return it, and you get the result you want. The method can take two parameters: the Map to iterate over, and a key. If the key is null, iterate the whole map and truncate the specified property of every entry; if the key has a value, truncate only the value for that key.

for (Entry<String, String> entry : map.entrySet()) {
    map.put(entry.getKey(), entry.getValue().substring(0,
        entry.getValue().length())); // when a key already exists in the map, put() overwrites the previous value
    System.out.println("key:" + entry.getKey() + ";value:" + entry.getValue());
}
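A complete version of the utility described above (iterate the map, truncate each value, put it back, return the map) might be sketched as follows. The method name truncateValues and the explicit length parameter are my own additions for illustration; the original answer does not show the finished method:

```java
import java.util.HashMap;
import java.util.Map;

public class MapTruncator {
    /**
     * Truncate values in the map to at most len characters.
     * If key is null, process every entry; otherwise only that key.
     */
    public static Map<String, String> truncateValues(Map<String, String> map,
                                                     String key, int len) {
        if (key != null) {
            String v = map.get(key);
            if (v != null) {
                map.put(key, v.substring(0, Math.min(len, v.length())));
            }
            return map;
        }
        for (Map.Entry<String, String> entry : map.entrySet()) {
            String v = entry.getValue();
            // put() on an existing key replaces the value; this is not a
            // structural modification, so iterating the entry set stays safe.
            map.put(entry.getKey(), v.substring(0, Math.min(len, v.length())));
        }
        return map;
    }

    public static void main(String[] args) {
        Map<String, String> map = new HashMap<String, String>();
        map.put("1", "11111");
        map.put("2", "22222");
        System.out.println(truncateValues(map, null, 3)); // every value cut to 3 chars
    }
}
```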

An input split is processed by a single map operation. The default implementation class is FileInputFormat, which extends InputFormat, an abstract class with two important methods: getSplits and createRecordReader.

public abstract class InputFormat<K, V> {

  public abstract List<InputSplit> getSplits(JobContext context)
      throws IOException, InterruptedException;

  public abstract RecordReader<K, V> createRecordReader(InputSplit split,
      TaskAttemptContext context) throws IOException, InterruptedException;
}

The client running the job computes the splits by calling getSplits(), then sends them to the ResourceManager (called the JobTracker in older versions). The ResourceManager uses the splits' storage-location information to schedule map tasks on NodeManagers (TaskTrackers) so that each task processes its split's data.
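To make the split computation concrete, here is a simplified, self-contained sketch of what getSplits() conceptually does: divide a file into fixed-size pieces. This is an illustration only, not Hadoop's actual FileInputFormat code, which additionally honours minimum/maximum split sizes, a slack factor for the last split, and HDFS block locations:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {
    // A simplified input split: just a byte offset and a length.
    static class Split {
        final long start, length;
        Split(long start, long length) { this.start = start; this.length = length; }
    }

    // Divide a file of fileLength bytes into splits of at most splitSize bytes.
    static List<Split> getSplits(long fileLength, long splitSize) {
        List<Split> splits = new ArrayList<>();
        long remaining = fileLength;
        while (remaining > 0) {
            long len = Math.min(splitSize, remaining);
            splits.add(new Split(fileLength - remaining, len));
            remaining -= len;
        }
        return splits;
    }

    public static void main(String[] args) {
        // A 300-byte "file" with a 128-byte split size yields three splits:
        // 128, 128, and 44 bytes.
        for (Split s : getSplits(300, 128)) {
            System.out.println(s.start + " + " + s.length);
        }
    }
}
```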

On a NodeManager, the map task passes its input split to the InputFormat's createRecordReader() method to obtain a RecordReader for that split. The map task uses the RecordReader to generate record key/value pairs, which it passes to the map function of the Mapper class. This is driven by the Mapper's run() method:

public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {

  /**
   * The <code>Context</code> passed on to the {@link Mapper} implementations.
   */
  public abstract class Context
      implements MapContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT> {
  }

  /**
   * Called once at the beginning of the task.
   */
  protected void setup(Context context
                       ) throws IOException, InterruptedException {
    // NOTHING
  }

  /**
   * Called once for each key/value pair in the input split. Most applications
   * should override this, but the default is the identity function.
   */
  @SuppressWarnings("unchecked")
  protected void map(KEYIN key, VALUEIN value,
                     Context context) throws IOException, InterruptedException {
    context.write((KEYOUT) key, (VALUEOUT) value);
  }

  /**
   * Called once at the end of the task.
   */
  protected void cleanup(Context context
                         ) throws IOException, InterruptedException {
    // NOTHING
  }

  /**
   * Expert users can override this method for more complete control over the
   * execution of the Mapper.
   * @param context
   * @throws IOException
   */
  public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    while (context.nextKeyValue()) {
      map(context.getCurrentKey(), context.getCurrentValue(), context);
    }
    cleanup(context);
  }
}

After setup() runs, nextKeyValue() is called repeatedly on the Context; it delegates to the RecordReader's method of the same name to produce the key and value objects for map(). Through the Context, the key and value are fetched back from the RecordReader and passed to map(). When the reader reaches the end of the stream, nextKeyValue() returns false, the map task runs cleanup(), and then finishes. (The implementation actually invoked here is the RecordReader subclass LineRecordReader.)
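The interplay between run(), nextKeyValue(), getCurrentKey() and getCurrentValue() can be mimicked with a plain-Java toy reader. The class and method names below are my own (only the method names mirror the RecordReader contract), and it reads from an in-memory list instead of a file:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ToyReaderDemo {
    // A minimal stand-in for RecordReader: advance first, then expose
    // the current key/value through getters.
    static class ToyLineReader {
        private final Iterator<String> lines;
        private long key = -1;   // plays the role of the byte-offset key
        private String value;

        ToyLineReader(List<String> lines) { this.lines = lines.iterator(); }

        boolean nextKeyValue() {          // like RecordReader.nextKeyValue()
            if (!lines.hasNext()) return false;
            value = lines.next();
            key += 1;                     // a real reader would use the byte offset
            return true;
        }
        long getCurrentKey()     { return key; }
        String getCurrentValue() { return value; }
    }

    public static void main(String[] args) {
        ToyLineReader reader = new ToyLineReader(Arrays.asList("a", "b", "c"));
        // This loop has the same shape as Mapper.run():
        while (reader.nextKeyValue()) {
            System.out.println(reader.getCurrentKey() + "\t" + reader.getCurrentValue());
        }
    }
}
```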


public class LineRecordReader extends RecordReader<LongWritable, Text> {
  private static final Log LOG = LogFactory.getLog(LineRecordReader.class);

  private CompressionCodecFactory compressionCodecs = null;
  private long start;
  private long pos;
  private long end;
  private LineReader in;
  private int maxLineLength;
  private LongWritable key = null;
  private Text value = null;
  private Seekable filePosition;
  private CompressionCodec codec;
  private Decompressor decompressor;
  private byte[] recordDelimiterBytes = null;

  public LineRecordReader() {
  }

  public LineRecordReader(byte[] recordDelimiter) {
    this.recordDelimiterBytes = recordDelimiter;
  }

  public void initialize(InputSplit genericSplit,
                         TaskAttemptContext context) throws IOException {
    FileSplit split = (FileSplit) genericSplit;
    Configuration job = context.getConfiguration();
    this.maxLineLength = job.getInt("mapred.linerecordreader.maxlength",
                                    Integer.MAX_VALUE);
    start = split.getStart();
    end = start + split.getLength();
    final Path file = split.getPath();
    compressionCodecs = new CompressionCodecFactory(job);
    codec = compressionCodecs.getCodec(file);

    // open the file and seek to the start of the split
    FileSystem fs = file.getFileSystem(job);
    FSDataInputStream fileIn = fs.open(split.getPath());

    if (isCompressedInput()) {
      decompressor = CodecPool.getDecompressor(codec);
      if (codec instanceof SplittableCompressionCodec) {
        final SplitCompressionInputStream cIn =
            ((SplittableCompressionCodec) codec).createInputStream(
                fileIn, decompressor, start, end,
                SplittableCompressionCodec.READ_MODE.BYBLOCK);
        in = new LineReader(cIn, job, recordDelimiterBytes);
        start = cIn.getAdjustedStart();
        end = cIn.getAdjustedEnd();
        filePosition = cIn;
      } else {
        in = new LineReader(codec.createInputStream(fileIn, decompressor), job,
                            recordDelimiterBytes);
        filePosition = fileIn;
      }
    } else {
      fileIn.seek(start);
      in = new LineReader(fileIn, job, recordDelimiterBytes);
      filePosition = fileIn;
    }
    // If this is not the first split, we always throw away the first record
    // because we always (except for the last split) read one extra line in
    // the next() method.
    if (start != 0) {
      start += in.readLine(new Text(), 0, maxBytesToConsume(start));
    }
    this.pos = start;
  }

  private boolean isCompressedInput() {
    return (codec != null);
  }

  private int maxBytesToConsume(long pos) {
    return isCompressedInput()
        ? Integer.MAX_VALUE
        : (int) Math.max(Math.min(Integer.MAX_VALUE, end - pos), maxLineLength);
  }

  private long getFilePosition() throws IOException {
    long retVal;
    if (isCompressedInput() && null != filePosition) {
      retVal = filePosition.getPos();
    } else {
      retVal = pos;
    }
    return retVal;
  }

  private int skipUtfByteOrderMark() throws IOException {
    // Strip the BOM (Byte Order Mark).
    // Text only supports UTF-8, so we only need to check for the UTF-8 BOM
    // (0xEF, 0xBB, 0xBF) at the start of the text stream.
    int newMaxLineLength = (int) Math.min(3L + (long) maxLineLength,
                                          Integer.MAX_VALUE);
    int newSize = in.readLine(value, newMaxLineLength, maxBytesToConsume(pos));
    // Even if we read 3 extra bytes for the first line,
    // we won't alter existing behavior (no backwards-incompat issue),
    // because newSize is less than maxLineLength and
    // the number of bytes copied to Text is always no more than newSize.
    // If the size returned by readLine is not less than maxLineLength,
    // we will discard the current line and read the next line.
    pos += newSize;
    int textLength = value.getLength();
    byte[] textBytes = value.getBytes();
    if ((textLength >= 3) && (textBytes[0] == (byte) 0xEF) &&
        (textBytes[1] == (byte) 0xBB) && (textBytes[2] == (byte) 0xBF)) {
      // found a UTF-8 BOM, strip it
      LOG.info("Found UTF-8 BOM and skipped it");
      textLength -= 3;
      newSize -= 3;
      if (textLength > 0) {
        // It may work to use the same buffer and not do the copyBytes
        textBytes = value.copyBytes();
        value.set(textBytes, 3, textLength);
      } else {
        value.clear();
      }
    }
    return newSize;
  }

  public boolean nextKeyValue() throws IOException {
    if (key == null) {
      key = new LongWritable();
    }
    key.set(pos);
    if (value == null) {
      value = new Text();
    }
    int newSize = 0;
    // We always read one extra line, which lies outside the upper
    // split limit, i.e. (end - 1).
    while (getFilePosition() <= end) {
      if (pos == 0) {
        newSize = skipUtfByteOrderMark();
      } else {
        newSize = in.readLine(value, maxLineLength, maxBytesToConsume(pos));
        pos += newSize;
      }
      if ((newSize == 0) || (newSize < maxLineLength)) {
        break;
      }
      // line too long, try again
      LOG.info("Skipped line of size " + newSize + " at pos " +
               (pos - newSize));
    }
    if (newSize == 0) {
      key = null;
      value = null;
      return false;
    } else {
      return true;
    }
  }

  @Override
  public LongWritable getCurrentKey() {
    return key;
  }

  @Override
  public Text getCurrentValue() {
    return value;
  }

  /**
   * Get the progress within the split.
   */
  public float getProgress() {
    if (start == end) {
      return 0.0f;
    } else {
      try {
        return Math.min(1.0f, (getFilePosition() - start)
                              / (float) (end - start));
      } catch (IOException ioe) {
        throw new RuntimeException(ioe);
      }
    }
  }

  public synchronized void close() throws IOException {
    try {
      if (in != null) {
        in.close();
      }
    } finally {
      if (decompressor != null) {
        CodecPool.returnDecompressor(decompressor);
      }
    }
  }
}
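The split-boundary trick in the code above (skip the first, possibly partial, line of every split except the first, and read one line past the split's end to finish the last record) can be illustrated with a small stand-alone sketch. The class name and helper below are my own, operating on an in-memory String rather than an HDFS stream:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitLineSketch {
    // Return the complete lines that a split [start, end) is responsible for,
    // following the LineRecordReader convention: every split except the first
    // discards the line it lands in (the previous split already emitted it),
    // and a split reads past `end` to finish its last line.
    static List<String> linesForSplit(String data, long start, long end) {
        List<String> lines = new ArrayList<>();
        int pos = (int) start;
        if (start != 0) {
            // Not the first split: skip up to and including the next '\n'.
            int nl = data.indexOf('\n', pos);
            pos = (nl < 0) ? data.length() : nl + 1;
        }
        while (pos < data.length() && pos <= end) {
            int nl = data.indexOf('\n', pos);
            int stop = (nl < 0) ? data.length() : nl;
            lines.add(data.substring(pos, stop));
            pos = stop + 1;
        }
        return lines;
    }

    public static void main(String[] args) {
        String data = "one\ntwo\nthree\n";
        // Split the 14-byte "file" at byte 5, mid-way through "two":
        System.out.println(linesForSplit(data, 0, 5));  // first split finishes "two"
        System.out.println(linesForSplit(data, 5, 14)); // second split skips into "three"
    }
}
```

Every line ends up in exactly one split, even though the byte boundary falls in the middle of a line, which is the whole point of the convention.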

That covers everything above: how to generate tile data in MapGIS K9, how React handles a Map collection inside an object, and why getParameterMap may not return form data as expected.

Feel free to share; when reposting, please credit the source: 内存溢出.

Original article: https://54852.com/web/9380893.html
